May 16 00:12:58.938029 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:08:20 -00 2025
May 16 00:12:58.938051 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 16 00:12:58.938062 kernel: BIOS-provided physical RAM map:
May 16 00:12:58.938069 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 16 00:12:58.938076 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 16 00:12:58.938082 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 16 00:12:58.938092 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 16 00:12:58.938101 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 16 00:12:58.938109 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 16 00:12:58.938118 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 16 00:12:58.938130 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 00:12:58.938137 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 16 00:12:58.938143 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 00:12:58.938150 kernel: NX (Execute Disable) protection: active
May 16 00:12:58.938158 kernel: APIC: Static calls initialized
May 16 00:12:58.938168 kernel: SMBIOS 2.8 present.
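The BIOS-e820 lines above are the firmware's physical memory map. On a running Linux system the same ranges are exposed under /sys/firmware/memmap; a minimal Python sketch (assuming that standard sysfs layout) reprints them:

    import os

    MEMMAP = "/sys/firmware/memmap"  # one numbered subdirectory per e820 entry

    def read_memmap():
        entries = []
        for name in sorted(os.listdir(MEMMAP), key=int):
            entry = os.path.join(MEMMAP, name)
            with open(os.path.join(entry, "start")) as f:
                start = int(f.read(), 16)   # values are hex strings like 0x100000
            with open(os.path.join(entry, "end")) as f:
                end = int(f.read(), 16)
            with open(os.path.join(entry, "type")) as f:
                kind = f.read().strip()     # "System RAM", "Reserved", ...
            entries.append((start, end, kind))
        return entries

    for start, end, kind in read_memmap():
        # mirrors the "BIOS-e820: [mem 0x...-0x...] usable/reserved" lines above
        print(f"[mem {start:#018x}-{end:#018x}] {kind}")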
May 16 00:12:58.938187 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 16 00:12:58.938197 kernel: Hypervisor detected: KVM
May 16 00:12:58.938206 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 00:12:58.938214 kernel: kvm-clock: using sched offset of 3814053734 cycles
May 16 00:12:58.938224 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 00:12:58.938233 kernel: tsc: Detected 2794.748 MHz processor
May 16 00:12:58.938243 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 00:12:58.938253 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 00:12:58.938262 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 16 00:12:58.938275 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 16 00:12:58.938284 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 00:12:58.938294 kernel: Using GB pages for direct mapping
May 16 00:12:58.938301 kernel: ACPI: Early table checksum verification disabled
May 16 00:12:58.938308 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 16 00:12:58.938316 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:12:58.938323 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:12:58.938331 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:12:58.938338 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 16 00:12:58.938348 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:12:58.938367 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:12:58.938375 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:12:58.938382 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:12:58.938390 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 16 00:12:58.938397 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 16 00:12:58.938409 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 16 00:12:58.938419 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 16 00:12:58.938426 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 16 00:12:58.938434 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 16 00:12:58.938442 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 16 00:12:58.938449 kernel: No NUMA configuration found
May 16 00:12:58.938457 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 16 00:12:58.938465 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 16 00:12:58.938475 kernel: Zone ranges:
May 16 00:12:58.938482 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 00:12:58.938490 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 16 00:12:58.938497 kernel: Normal empty
May 16 00:12:58.938505 kernel: Movable zone start for each node
May 16 00:12:58.938513 kernel: Early memory node ranges
May 16 00:12:58.938520 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 16 00:12:58.938528 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 16 00:12:58.938535 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 16 00:12:58.938543 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:12:58.938553 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 16 00:12:58.938561 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 16 00:12:58.938568 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 00:12:58.938576 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 00:12:58.938583 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 00:12:58.938591 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 00:12:58.938599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 00:12:58.938606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 00:12:58.938614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 00:12:58.938624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 00:12:58.938631 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 00:12:58.938639 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 00:12:58.938646 kernel: TSC deadline timer available
May 16 00:12:58.938654 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 16 00:12:58.938662 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 00:12:58.938669 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 00:12:58.938677 kernel: kvm-guest: setup PV sched yield
May 16 00:12:58.938685 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 16 00:12:58.938694 kernel: Booting paravirtualized kernel on KVM
May 16 00:12:58.938702 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 00:12:58.938710 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 00:12:58.938718 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 16 00:12:58.938726 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 16 00:12:58.938733 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 00:12:58.938741 kernel: kvm-guest: PV spinlocks enabled
May 16 00:12:58.938748 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 00:12:58.938757 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 16 00:12:58.938768 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:12:58.938775 kernel: random: crng init done
May 16 00:12:58.938783 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:12:58.938791 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:12:58.938798 kernel: Fallback order for Node 0: 0
May 16 00:12:58.938806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 16 00:12:58.938814 kernel: Policy zone: DMA32
May 16 00:12:58.938821 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:12:58.938831 kernel: Memory: 2430496K/2571752K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43600K init, 1472K bss, 140996K reserved, 0K cma-reserved)
May 16 00:12:58.938839 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:12:58.938847 kernel: ftrace: allocating 37997 entries in 149 pages
May 16 00:12:58.938854 kernel: ftrace: allocated 149 pages with 4 groups
May 16 00:12:58.938862 kernel: Dynamic Preempt: voluntary
May 16 00:12:58.938870 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:12:58.938878 kernel: rcu: RCU event tracing is enabled.
May 16 00:12:58.938886 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:12:58.938894 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:12:58.938904 kernel: Rude variant of Tasks RCU enabled.
May 16 00:12:58.938911 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:12:58.938919 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:12:58.938927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:12:58.938934 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 00:12:58.938942 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 00:12:58.938950 kernel: Console: colour VGA+ 80x25
May 16 00:12:58.938957 kernel: printk: console [ttyS0] enabled
May 16 00:12:58.938965 kernel: ACPI: Core revision 20230628
May 16 00:12:58.938973 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 00:12:58.938983 kernel: APIC: Switch to symmetric I/O mode setup
May 16 00:12:58.938990 kernel: x2apic enabled
May 16 00:12:58.938998 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 00:12:58.939006 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 00:12:58.939014 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 00:12:58.939021 kernel: kvm-guest: setup PV IPIs
May 16 00:12:58.939039 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 00:12:58.939047 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 16 00:12:58.939055 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 00:12:58.939063 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 00:12:58.939070 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 00:12:58.939081 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 00:12:58.939089 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 00:12:58.939096 kernel: Spectre V2 : Mitigation: Retpolines
May 16 00:12:58.939105 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 16 00:12:58.939113 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 00:12:58.939123 kernel: RETBleed: Mitigation: untrained return thunk
May 16 00:12:58.939131 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 00:12:58.939139 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 00:12:58.939147 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
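Note that the "Kernel command line:" entry repeats rootflags=rw mount.usrflags=ro: the bootloader prepends its stored defaults to the arguments already in the boot entry, and for most kernel and userspace consumers the last occurrence of a key wins. A rough last-one-wins parse of /proc/cmdline (a hypothetical helper; it ignores quoted values with spaces):

    def parse_cmdline(text: str) -> dict:
        """Split kernel arguments into a dict; later duplicates override earlier ones."""
        args = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            args[key] = value if sep else None  # bare flags carry no value
        return args

    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read())
    print(args.get("root"), args.get("mount.usr"), args.get("verity.usrhash"))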
May 16 00:12:58.939156 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 00:12:58.939164 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 00:12:58.939180 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 00:12:58.939188 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 00:12:58.939199 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 00:12:58.939206 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 00:12:58.939215 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 00:12:58.939223 kernel: Freeing SMP alternatives memory: 32K
May 16 00:12:58.939231 kernel: pid_max: default: 32768 minimum: 301
May 16 00:12:58.939239 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 00:12:58.939247 kernel: landlock: Up and running.
May 16 00:12:58.939255 kernel: SELinux: Initializing.
May 16 00:12:58.939263 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:12:58.939273 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:12:58.939281 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 00:12:58.939289 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:12:58.939297 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:12:58.939305 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:12:58.939313 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 00:12:58.939321 kernel: ... version: 0
May 16 00:12:58.939329 kernel: ... bit width: 48
May 16 00:12:58.939337 kernel: ... generic registers: 6
May 16 00:12:58.939347 kernel: ... value mask: 0000ffffffffffff
May 16 00:12:58.939416 kernel: ... max period: 00007fffffffffff
May 16 00:12:58.939425 kernel: ... fixed-purpose events: 0
May 16 00:12:58.939434 kernel: ... event mask: 000000000000003f
May 16 00:12:58.939444 kernel: signal: max sigframe size: 1776
May 16 00:12:58.939452 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:12:58.939462 kernel: rcu: Max phase no-delay instances is 400.
May 16 00:12:58.939470 kernel: smp: Bringing up secondary CPUs ...
May 16 00:12:58.939478 kernel: smpboot: x86: Booting SMP configuration:
May 16 00:12:58.939489 kernel: .... node #0, CPUs: #1 #2 #3
May 16 00:12:58.939497 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:12:58.939504 kernel: smpboot: Max logical packages: 1
May 16 00:12:58.939512 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 00:12:58.939520 kernel: devtmpfs: initialized
May 16 00:12:58.939528 kernel: x86/mm: Memory block size: 128MB
May 16 00:12:58.939536 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:12:58.939544 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:12:58.939552 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:12:58.939562 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:12:58.939570 kernel: audit: initializing netlink subsys (disabled)
May 16 00:12:58.939578 kernel: audit: type=2000 audit(1747354378.169:1): state=initialized audit_enabled=0 res=1
May 16 00:12:58.939586 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:12:58.939594 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 00:12:58.939602 kernel: cpuidle: using governor menu
May 16 00:12:58.939610 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:12:58.939618 kernel: dca service started, version 1.12.1
May 16 00:12:58.939626 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 16 00:12:58.939636 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 16 00:12:58.939644 kernel: PCI: Using configuration type 1 for base access
May 16 00:12:58.939652 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 00:12:58.939660 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:12:58.939668 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 00:12:58.939676 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:12:58.939684 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 00:12:58.939692 kernel: ACPI: Added _OSI(Module Device)
May 16 00:12:58.939700 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:12:58.939710 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:12:58.939718 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:12:58.939726 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:12:58.939734 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 16 00:12:58.939742 kernel: ACPI: Interpreter enabled
May 16 00:12:58.939750 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 00:12:58.939758 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 00:12:58.939766 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 00:12:58.939773 kernel: PCI: Using E820 reservations for host bridge windows
May 16 00:12:58.939784 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 00:12:58.939792 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:12:58.939977 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:12:58.940111 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 00:12:58.940246 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 00:12:58.940257 kernel: PCI host bridge to bus 0000:00
May 16 00:12:58.940398 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
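The BogoMIPS figures are internally consistent: the preset loops-per-jiffy value comes straight from the 2794.748 MHz TSC, and the 4-CPU total is the per-CPU value times four. A quick check (CONFIG_HZ=1000 is an assumption for this build):

    lpj = 2_794_748               # "Calibrating delay loop (skipped) ... (lpj=2794748)"
    hz = 1000                     # assumed CONFIG_HZ
    bogomips = lpj * hz / 500_000 # the kernel's reporting formula (it truncates to 2 digits)
    print(f"{bogomips:.3f}")      # 5589.496 -> printed as "5589.49 BogoMIPS"
    print(f"{4 * bogomips:.2f}")  # 22357.98, matching "Total of 4 processors activated"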
May 16 00:12:58.940522 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 00:12:58.940637 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 00:12:58.940762 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 16 00:12:58.940878 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 16 00:12:58.940993 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 16 00:12:58.941108 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:12:58.941261 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 16 00:12:58.941429 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 16 00:12:58.941556 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 16 00:12:58.941682 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 16 00:12:58.941807 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 16 00:12:58.941932 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 00:12:58.942067 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:12:58.942208 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 16 00:12:58.942336 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 16 00:12:58.942485 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 16 00:12:58.942621 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 16 00:12:58.942748 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 16 00:12:58.942873 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 16 00:12:58.943000 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 16 00:12:58.943140 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 16 00:12:58.943277 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 16 00:12:58.943422 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 16 00:12:58.943550 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 16 00:12:58.943676 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 16 00:12:58.943810 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 16 00:12:58.943937 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 00:12:58.944075 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 16 00:12:58.944211 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 16 00:12:58.944338 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 16 00:12:58.944487 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 16 00:12:58.944620 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 16 00:12:58.944633 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 00:12:58.944642 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 00:12:58.944654 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 00:12:58.944662 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 00:12:58.944670 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 00:12:58.944678 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 00:12:58.944686 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 00:12:58.944694 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 00:12:58.944702 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 00:12:58.944710 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 00:12:58.944718 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 00:12:58.944728 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 00:12:58.944736 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 00:12:58.944744 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 00:12:58.944752 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 00:12:58.944760 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 00:12:58.944768 kernel: iommu: Default domain type: Translated
May 16 00:12:58.944776 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 00:12:58.944783 kernel: PCI: Using ACPI for IRQ routing
May 16 00:12:58.944791 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 00:12:58.944802 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 16 00:12:58.944810 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 16 00:12:58.944937 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 00:12:58.945064 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 00:12:58.945199 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 00:12:58.945211 kernel: vgaarb: loaded
May 16 00:12:58.945219 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 00:12:58.945227 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 00:12:58.945238 kernel: clocksource: Switched to clocksource kvm-clock
May 16 00:12:58.945246 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:12:58.945254 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:12:58.945262 kernel: pnp: PnP ACPI init
May 16 00:12:58.945430 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 16 00:12:58.945442 kernel: pnp: PnP ACPI: found 6 devices
May 16 00:12:58.945451 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 00:12:58.945459 kernel: NET: Registered PF_INET protocol family
May 16 00:12:58.945467 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:12:58.945478 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:12:58.945487 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:12:58.945495 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:12:58.945503 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 00:12:58.945511 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:12:58.945519 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:12:58.945527 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:12:58.945535 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:12:58.945546 kernel: NET: Registered PF_XDP protocol family
May 16 00:12:58.945665 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 00:12:58.945782 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 00:12:58.945899 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 00:12:58.946015 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 16 00:12:58.946130 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 16 00:12:58.946256 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 16 00:12:58.946267 kernel: PCI: CLS 0 bytes, default 64
May 16 00:12:58.946275 kernel: Initialise system trusted keyrings
May 16 00:12:58.946286 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:12:58.946294 kernel: Key type asymmetric registered
May 16 00:12:58.946302 kernel: Asymmetric key parser 'x509' registered
May 16 00:12:58.946310 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 16 00:12:58.946318 kernel: io scheduler mq-deadline registered
May 16 00:12:58.946326 kernel: io scheduler kyber registered
May 16 00:12:58.946334 kernel: io scheduler bfq registered
May 16 00:12:58.946342 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 00:12:58.946350 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 00:12:58.946423 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 00:12:58.946431 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 00:12:58.946439 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:12:58.946447 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 00:12:58.946455 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 00:12:58.946463 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 00:12:58.946471 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 00:12:58.946603 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 00:12:58.946619 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 00:12:58.946735 kernel: rtc_cmos 00:04: registered as rtc0
May 16 00:12:58.946852 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T00:12:58 UTC (1747354378)
May 16 00:12:58.946970 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 16 00:12:58.946981 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 00:12:58.946989 kernel: NET: Registered PF_INET6 protocol family
May 16 00:12:58.946997 kernel: Segment Routing with IPv6
May 16 00:12:58.947004 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:12:58.947013 kernel: NET: Registered PF_PACKET protocol family
May 16 00:12:58.947024 kernel: Key type dns_resolver registered
May 16 00:12:58.947032 kernel: IPI shorthand broadcast: enabled
May 16 00:12:58.947040 kernel: sched_clock: Marking stable (674001937, 122357518)->(814575781, -18216326)
May 16 00:12:58.947048 kernel: registered taskstats version 1
May 16 00:12:58.947056 kernel: Loading compiled-in X.509 certificates
May 16 00:12:58.947064 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 36d9e3bf63b9b28466bcfa7a508d814673a33a26'
May 16 00:12:58.947072 kernel: Key type .fscrypt registered
May 16 00:12:58.947079 kernel: Key type fscrypt-provisioning registered
May 16 00:12:58.947087 kernel: ima: No TPM chip found, activating TPM-bypass!
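The rtc_cmos entry agrees with the journal's own clock and with the earlier audit stamp (audit(1747354378.169:1)): 1747354378 seconds after the Unix epoch is exactly 2025-05-16T00:12:58Z. Verifying:

    from datetime import datetime, timezone

    epoch = 1_747_354_378  # "setting system clock to 2025-05-16T00:12:58 UTC (1747354378)"
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2025-05-16T00:12:58+00:00, matching the timestamps on these log lines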
May 16 00:12:58.947098 kernel: ima: Allocated hash algorithm: sha1
May 16 00:12:58.947106 kernel: ima: No architecture policies found
May 16 00:12:58.947113 kernel: clk: Disabling unused clocks
May 16 00:12:58.947121 kernel: Freeing unused kernel image (initmem) memory: 43600K
May 16 00:12:58.947129 kernel: Write protecting the kernel read-only data: 40960k
May 16 00:12:58.947137 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 16 00:12:58.947145 kernel: Run /init as init process
May 16 00:12:58.947153 kernel: with arguments:
May 16 00:12:58.947163 kernel: /init
May 16 00:12:58.947179 kernel: with environment:
May 16 00:12:58.947187 kernel: HOME=/
May 16 00:12:58.947195 kernel: TERM=linux
May 16 00:12:58.947202 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 00:12:58.947212 systemd[1]: Successfully made /usr/ read-only.
May 16 00:12:58.947223 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 00:12:58.947233 systemd[1]: Detected virtualization kvm.
May 16 00:12:58.947244 systemd[1]: Detected architecture x86-64.
May 16 00:12:58.947252 systemd[1]: Running in initrd.
May 16 00:12:58.947260 systemd[1]: No hostname configured, using default hostname.
May 16 00:12:58.947269 systemd[1]: Hostname set to <localhost>.
May 16 00:12:58.947278 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:12:58.947286 systemd[1]: Queued start job for default target initrd.target.
May 16 00:12:58.947295 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:12:58.947304 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:12:58.947316 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 00:12:58.947336 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 00:12:58.947347 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 00:12:58.947369 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 00:12:58.947379 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 00:12:58.947391 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 00:12:58.947400 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:12:58.947409 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 00:12:58.947417 systemd[1]: Reached target paths.target - Path Units.
May 16 00:12:58.947426 systemd[1]: Reached target slices.target - Slice Units.
May 16 00:12:58.947435 systemd[1]: Reached target swap.target - Swaps.
May 16 00:12:58.947443 systemd[1]: Reached target timers.target - Timer Units.
May 16 00:12:58.947462 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 00:12:58.947475 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 00:12:58.947483 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
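The device units systemd says it is expecting (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are ordinary block-device paths run through systemd's unit-name escaping, where '/' separators become '-' and unsafe characters such as '-' are hex-escaped. An approximate sketch of that mapping (simplified; systemd-escape(1) is the authoritative tool):

    def systemd_escape_path(path: str) -> str:
        """Approximate systemd path escaping for unit names."""
        parts = path.strip("/").split("/")
        escaped = [
            "".join(c if c.isalnum() or c in "_." else f"\\x{ord(c):02x}" for c in part)
            for part in parts
        ]
        return "-".join(escaped)

    print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above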
May 16 00:12:58.947492 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 00:12:58.947501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:12:58.947510 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 00:12:58.947527 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:12:58.947536 systemd[1]: Reached target sockets.target - Socket Units.
May 16 00:12:58.947552 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 00:12:58.947568 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 00:12:58.947594 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 00:12:58.947604 systemd[1]: Starting systemd-fsck-usr.service...
May 16 00:12:58.947613 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 00:12:58.947629 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 00:12:58.947645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:12:58.947655 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 00:12:58.947663 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 00:12:58.947675 systemd[1]: Finished systemd-fsck-usr.service.
May 16 00:12:58.947685 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 00:12:58.947717 systemd-journald[195]: Collecting audit messages is disabled.
May 16 00:12:58.947750 systemd-journald[195]: Journal started
May 16 00:12:58.947778 systemd-journald[195]: Runtime Journal (/run/log/journal/b2571fbb63064803b11881727d352126) is 6M, max 48.3M, 42.3M free.
May 16 00:12:58.948431 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 00:12:58.949709 systemd-modules-load[196]: Inserted module 'overlay'
May 16 00:12:58.972237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:12:58.980734 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 00:12:59.010503 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 00:12:59.021394 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 00:12:59.023142 systemd-modules-load[196]: Inserted module 'br_netfilter'
May 16 00:12:59.024212 kernel: Bridge firewalling registered
May 16 00:12:59.026770 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 00:12:59.029417 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 00:12:59.033899 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 00:12:59.037467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 00:12:59.043555 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:12:59.047564 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:12:59.051526 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 00:12:59.054815 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 00:12:59.055165 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:12:59.064272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 00:12:59.099018 dracut-cmdline[225]: dracut-dracut-053
May 16 00:12:59.101940 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5e2f56b68c7f7e65e4df73d074f249f99b5795b677316c47e2ad758e6bd99733
May 16 00:12:59.127460 systemd-resolved[229]: Positive Trust Anchors:
May 16 00:12:59.127479 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:12:59.127519 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 00:12:59.130488 systemd-resolved[229]: Defaulting to hostname 'linux'.
May 16 00:12:59.131823 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 00:12:59.138327 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 00:12:59.212408 kernel: SCSI subsystem initialized
May 16 00:12:59.224402 kernel: Loading iSCSI transport class v2.0-870.
May 16 00:12:59.235392 kernel: iscsi: registered transport (tcp)
May 16 00:12:59.256519 kernel: iscsi: registered transport (qla4xxx)
May 16 00:12:59.256607 kernel: QLogic iSCSI HBA Driver
May 16 00:12:59.308631 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 00:12:59.312591 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 00:12:59.368412 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 00:12:59.368506 kernel: device-mapper: uevent: version 1.0.3
May 16 00:12:59.368524 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 16 00:12:59.418398 kernel: raid6: avx2x4 gen() 18682 MB/s
May 16 00:12:59.435391 kernel: raid6: avx2x2 gen() 26131 MB/s
May 16 00:12:59.452658 kernel: raid6: avx2x1 gen() 23620 MB/s
May 16 00:12:59.452701 kernel: raid6: using algorithm avx2x2 gen() 26131 MB/s
May 16 00:12:59.470479 kernel: raid6: .... xor() 19109 MB/s, rmw enabled
May 16 00:12:59.470524 kernel: raid6: using avx2x2 recovery algorithm
May 16 00:12:59.491396 kernel: xor: automatically using best checksumming function avx
May 16 00:12:59.639398 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 00:12:59.652708 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 00:12:59.675875 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:12:59.712747 systemd-udevd[414]: Using default interface naming scheme 'v255'.
May 16 00:12:59.721597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:12:59.730963 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 00:12:59.792707 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
May 16 00:12:59.863907 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 00:12:59.868962 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 00:13:00.008961 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:13:00.015255 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 00:13:00.043717 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 00:13:00.047213 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 00:13:00.050781 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 00:13:00.053559 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 00:13:00.059538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 00:13:00.065410 kernel: cryptd: max_cpu_qlen set to 1000
May 16 00:13:00.073426 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 16 00:13:00.079037 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 00:13:00.087064 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 00:13:00.087176 kernel: GPT:9289727 != 19775487
May 16 00:13:00.087249 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 00:13:00.087269 kernel: GPT:9289727 != 19775487
May 16 00:13:00.087283 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 00:13:00.088196 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:13:00.089699 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 00:13:00.099397 kernel: libata version 3.00 loaded.
May 16 00:13:00.108400 kernel: AVX2 version of gcm_enc/dec engaged.
May 16 00:13:00.108464 kernel: AES CTR mode by8 optimization enabled
May 16 00:13:00.108480 kernel: ahci 0000:00:1f.2: version 3.0
May 16 00:13:00.110387 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 16 00:13:00.114636 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 16 00:13:00.114923 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 16 00:13:00.115925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 00:13:00.116090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:13:00.120928 kernel: scsi host0: ahci
May 16 00:13:00.122442 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 00:13:00.126124 kernel: scsi host1: ahci
May 16 00:13:00.124240 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:13:00.124511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:13:00.136025 kernel: scsi host2: ahci
May 16 00:13:00.136298 kernel: scsi host3: ahci
May 16 00:13:00.136533 kernel: scsi host4: ahci
May 16 00:13:00.130245 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
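The GPT warnings above are the classic signature of a disk image that was grown without relocating the backup GPT header: the primary header still records the alternate-header LBA of the original, smaller image. The arithmetic, using only numbers from the log:

    SECTOR = 512
    total_sectors = 19_775_488        # "virtio_blk ... [vda] 19775488 512-byte logical blocks"
    expected_alt = total_sectors - 1  # the backup GPT header belongs in the last LBA
    stored_alt = 9_289_727            # "GPT:9289727 != 19775487"

    print(expected_alt)                       # 19775487
    print((stored_alt + 1) * SECTOR / 2**30)  # ~4.43 GiB: the image's original size

So a ~4.4 GiB image was grown to 9.43 GiB. Tools like `sgdisk -e` or parted can move the backup structures to the new end of the disk; Flatcar's first-boot partition resize normally takes care of this on its own, which is consistent with disk-uuid.service rewriting the headers below.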
May 16 00:13:00.161506 kernel: scsi host5: ahci
May 16 00:13:00.165299 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 16 00:13:00.165349 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 16 00:13:00.165379 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 16 00:13:00.165396 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 16 00:13:00.165410 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 16 00:13:00.165424 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 16 00:13:00.137787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:13:00.165928 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 00:13:00.186404 kernel: BTRFS: device fsid a728581e-9e7f-4655-895a-4f66e17e3645 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (469)
May 16 00:13:00.191390 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (478)
May 16 00:13:00.221040 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 00:13:00.242760 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 00:13:00.245846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:13:00.258844 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 00:13:00.260468 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 00:13:00.273384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 00:13:00.277706 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 00:13:00.284179 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 00:13:00.308595 disk-uuid[570]: Primary Header is updated.
May 16 00:13:00.308595 disk-uuid[570]: Secondary Entries is updated.
May 16 00:13:00.308595 disk-uuid[570]: Secondary Header is updated.
May 16 00:13:00.318421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:13:00.316003 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:13:00.477440 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 16 00:13:00.479453 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 16 00:13:00.479541 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 16 00:13:00.482422 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 16 00:13:00.485529 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 16 00:13:00.485741 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 16 00:13:00.485756 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 16 00:13:00.486525 kernel: ata3.00: applying bridge limits
May 16 00:13:00.487894 kernel: ata3.00: configured for UDMA/100
May 16 00:13:00.490414 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 16 00:13:00.539648 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 16 00:13:00.540078 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 16 00:13:00.558441 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 16 00:13:01.347405 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:13:01.353652 disk-uuid[578]: The operation has completed successfully.
May 16 00:13:01.436859 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 00:13:01.438048 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 00:13:01.521276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 00:13:01.582533 sh[595]: Success
May 16 00:13:01.611181 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 16 00:13:01.688905 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 00:13:01.707899 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 00:13:01.720087 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 00:13:01.765909 kernel: BTRFS info (device dm-0): first mount of filesystem a728581e-9e7f-4655-895a-4f66e17e3645
May 16 00:13:01.766000 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 16 00:13:01.766017 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 16 00:13:01.766049 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 16 00:13:01.766964 kernel: BTRFS info (device dm-0): using free space tree
May 16 00:13:01.804641 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 00:13:01.807975 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 00:13:01.817813 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 00:13:01.822661 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 00:13:01.858301 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 16 00:13:01.858402 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 00:13:01.858427 kernel: BTRFS info (device vda6): using free space tree
May 16 00:13:01.876826 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 00:13:01.888911 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 16 00:13:02.006355 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 00:13:02.008489 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 00:13:02.079549 systemd-networkd[771]: lo: Link UP
May 16 00:13:02.080120 systemd-networkd[771]: lo: Gained carrier
May 16 00:13:02.082495 systemd-networkd[771]: Enumeration completed
May 16 00:13:02.082973 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:13:02.082979 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:13:02.107669 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 00:13:02.108022 systemd-networkd[771]: eth0: Link UP
May 16 00:13:02.108027 systemd-networkd[771]: eth0: Gained carrier
May 16 00:13:02.108041 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:13:02.172632 systemd[1]: Reached target network.target - Network.
May 16 00:13:02.222486 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:13:02.319606 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 00:13:02.330562 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 00:13:02.856544 ignition[776]: Ignition 2.20.0
May 16 00:13:02.856564 ignition[776]: Stage: fetch-offline
May 16 00:13:02.856623 ignition[776]: no configs at "/usr/lib/ignition/base.d"
May 16 00:13:02.856635 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:13:02.856768 ignition[776]: parsed url from cmdline: ""
May 16 00:13:02.856774 ignition[776]: no config URL provided
May 16 00:13:02.856781 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
May 16 00:13:02.861670 ignition[776]: no config at "/usr/lib/ignition/user.ign"
May 16 00:13:02.861746 ignition[776]: op(1): [started] loading QEMU firmware config module
May 16 00:13:02.861754 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 00:13:02.889785 ignition[776]: op(1): [finished] loading QEMU firmware config module
May 16 00:13:02.933483 ignition[776]: parsing config with SHA512: e54936b3192a15fbfdcd5ed0755cbaea923ef4e9d352bc58f948227dd0564dc8145d1289a62c416a9331a2603987f85dcc25fc721231ad971c884000959c3744
May 16 00:13:02.944506 unknown[776]: fetched base config from "system"
May 16 00:13:02.944526 unknown[776]: fetched user config from "qemu"
May 16 00:13:02.945076 ignition[776]: fetch-offline: fetch-offline passed
May 16 00:13:02.945183 ignition[776]: Ignition finished successfully
May 16 00:13:02.952095 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 00:13:02.954602 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 00:13:02.956120 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 00:13:03.125212 ignition[786]: Ignition 2.20.0
May 16 00:13:03.125233 ignition[786]: Stage: kargs
May 16 00:13:03.125448 ignition[786]: no configs at "/usr/lib/ignition/base.d"
May 16 00:13:03.125463 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:13:03.168156 ignition[786]: kargs: kargs passed
May 16 00:13:03.168249 ignition[786]: Ignition finished successfully
May 16 00:13:03.173453 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 00:13:03.176312 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
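The "parsing config with SHA512" line is Ignition logging a digest of the rendered config it just fetched over QEMU's fw_cfg interface (via the qemu_fw_cfg module it modprobed above). The digest of a config blob is reproducible with stock hashlib; the local file name below is hypothetical:

    import hashlib

    with open("user.ign", "rb") as f:   # hypothetical local copy of the config
        digest = hashlib.sha512(f.read()).hexdigest()
    print(digest)  # compare against the SHA512 Ignition prints during fetch-offline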
May 16 00:13:03.229525 ignition[795]: Ignition 2.20.0
May 16 00:13:03.229537 ignition[795]: Stage: disks
May 16 00:13:03.229700 ignition[795]: no configs at "/usr/lib/ignition/base.d"
May 16 00:13:03.229713 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:13:03.230535 ignition[795]: disks: disks passed
May 16 00:13:03.230580 ignition[795]: Ignition finished successfully
May 16 00:13:03.237239 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 00:13:03.237618 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 00:13:03.240787 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 00:13:03.243230 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 00:13:03.243610 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 00:13:03.247472 systemd[1]: Reached target basic.target - Basic System.
May 16 00:13:03.248821 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 00:13:03.276647 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 16 00:13:03.469609 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 00:13:03.471100 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 00:13:03.604430 kernel: EXT4-fs (vda9): mounted filesystem f27adc75-a467-4bfb-9c02-79a2879452a3 r/w with ordered data mode. Quota mode: none.
May 16 00:13:03.605204 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 00:13:03.606016 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 00:13:03.609917 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 00:13:03.612316 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 00:13:03.612765 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 00:13:03.612819 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 00:13:03.612851 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 00:13:03.629736 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 00:13:03.635623 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815)
May 16 00:13:03.635653 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 16 00:13:03.635676 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 00:13:03.635696 kernel: BTRFS info (device vda6): using free space tree
May 16 00:13:03.636486 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 00:13:03.639840 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 00:13:03.640434 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 00:13:03.734056 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
May 16 00:13:03.736469 systemd-networkd[771]: eth0: Gained IPv6LL
May 16 00:13:03.740427 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
May 16 00:13:03.745261 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
May 16 00:13:03.750103 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 00:13:03.866528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 00:13:03.869451 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 00:13:03.871715 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 00:13:03.934170 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 00:13:03.936905 kernel: BTRFS info (device vda6): last unmount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 16 00:13:03.951320 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 00:13:04.040191 ignition[931]: INFO : Ignition 2.20.0
May 16 00:13:04.040191 ignition[931]: INFO : Stage: mount
May 16 00:13:04.042493 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:13:04.042493 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:13:04.042493 ignition[931]: INFO : mount: mount passed
May 16 00:13:04.042493 ignition[931]: INFO : Ignition finished successfully
May 16 00:13:04.049476 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 00:13:04.052151 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 00:13:04.607108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 00:13:04.644620 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
May 16 00:13:04.644733 kernel: BTRFS info (device vda6): first mount of filesystem 206158fa-d3b7-4891-accd-2db768e6ca22
May 16 00:13:04.644749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 00:13:04.645808 kernel: BTRFS info (device vda6): using free space tree
May 16 00:13:04.650397 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 00:13:04.651894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 00:13:04.684969 ignition[959]: INFO : Ignition 2.20.0 May 16 00:13:04.684969 ignition[959]: INFO : Stage: files May 16 00:13:04.687475 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:13:04.687475 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:13:04.687475 ignition[959]: DEBUG : files: compiled without relabeling support, skipping May 16 00:13:04.692196 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:13:04.692196 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:13:04.692196 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:13:04.692196 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:13:04.692196 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:13:04.691670 unknown[959]: wrote ssh authorized keys file for user: core May 16 00:13:04.703160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 00:13:04.703160 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 16 00:13:04.762616 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:13:05.245425 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 16 00:13:05.245425 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 00:13:05.259507 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:13:05.259507 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:13:05.259507 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:13:05.259507 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:13:05.259507 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:13:05.259507 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:13:05.259507 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:13:05.275893 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:13:05.275893 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:13:05.275893 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:13:05.275893 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:13:05.275893 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:13:05.275893 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 16 00:13:05.822962 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 00:13:06.607662 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 16 00:13:06.607662 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 16 00:13:06.612223 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:13:06.612223 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:13:06.612223 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 16 00:13:06.612223 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 16 00:13:06.612223 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:13:06.612223 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:13:06.612223 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 16 00:13:06.612223 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:13:06.632609 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:13:06.636918 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:13:06.638786 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:13:06.638786 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 16 00:13:06.638786 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:13:06.638786 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:13:06.638786 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:13:06.638786 ignition[959]: INFO : files: files passed May 16 00:13:06.638786 ignition[959]: INFO : Ignition finished successfully May 16 00:13:06.639611 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 00:13:06.643589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 00:13:06.647612 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 00:13:06.656566 systemd[1]: ignition-quench.service: Deactivated successfully. 
May 16 00:13:06.656698 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 16 00:13:06.660254 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory May 16 00:13:06.662083 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:13:06.662083 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 00:13:06.665228 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:13:06.669098 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:13:06.672227 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 00:13:06.676575 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 00:13:06.745154 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:13:06.746558 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 00:13:06.750175 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 00:13:06.752832 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 00:13:06.755578 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 00:13:06.758672 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 00:13:06.786981 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:13:06.791737 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 00:13:06.820284 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 00:13:06.823846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:13:06.826429 systemd[1]: Stopped target timers.target - Timer Units. May 16 00:13:06.828372 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:13:06.829444 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 00:13:06.832174 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 00:13:06.838350 systemd[1]: Stopped target basic.target - Basic System. May 16 00:13:06.840466 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 00:13:06.842831 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:13:06.845456 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 00:13:06.848031 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 00:13:06.850294 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 00:13:06.853132 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 00:13:06.855369 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 00:13:06.857520 systemd[1]: Stopped target swap.target - Swaps. May 16 00:13:06.859280 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:13:06.860480 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 00:13:06.862898 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 16 00:13:06.865213 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:13:06.867659 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 00:13:06.868917 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:13:06.871945 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:13:06.873178 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 16 00:13:06.875848 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:13:06.877392 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:13:06.880169 systemd[1]: Stopped target paths.target - Path Units. May 16 00:13:06.882472 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:13:06.886480 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:13:06.889752 systemd[1]: Stopped target slices.target - Slice Units. May 16 00:13:06.891970 systemd[1]: Stopped target sockets.target - Socket Units. May 16 00:13:06.894232 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:13:06.895329 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:13:06.897810 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:13:06.898869 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:13:06.901323 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:13:06.902790 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 00:13:06.906054 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:13:06.907240 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 00:13:06.911107 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 00:13:06.914683 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 00:13:06.916893 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:13:06.918286 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:13:06.921194 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:13:06.922479 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:13:06.930647 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:13:06.931965 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 00:13:06.949791 ignition[1015]: INFO : Ignition 2.20.0 May 16 00:13:06.949791 ignition[1015]: INFO : Stage: umount May 16 00:13:06.952106 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:13:06.952106 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:13:06.952106 ignition[1015]: INFO : umount: umount passed May 16 00:13:06.952106 ignition[1015]: INFO : Ignition finished successfully May 16 00:13:06.953511 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:13:06.960077 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:13:06.960268 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 00:13:06.963992 systemd[1]: Stopped target network.target - Network. May 16 00:13:06.966970 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 16 00:13:06.967081 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 00:13:06.969803 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:13:06.969878 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 00:13:06.972850 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:13:06.972973 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 00:13:06.975482 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 00:13:06.975547 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 00:13:06.978100 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 00:13:06.981006 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 00:13:06.983961 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:13:06.984137 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 00:13:06.989386 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 00:13:06.990249 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 00:13:06.990396 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:13:06.994752 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 00:13:06.995154 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:13:06.995318 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 00:13:06.998376 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 00:13:06.999213 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:13:06.999300 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 00:13:07.002355 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 00:13:07.004566 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:13:07.004645 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:13:07.007489 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:13:07.007561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:13:07.010810 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:13:07.010879 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 00:13:07.013481 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:13:07.017713 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:13:07.045865 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:13:07.046158 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:13:07.049275 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:13:07.049419 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 00:13:07.052049 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:13:07.052142 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 00:13:07.053445 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:13:07.053487 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
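
Mount unit names like run-credentials-systemd\x2dresolved.service.mount above follow systemd's path escaping: "/" separators become "-", and characters that would be ambiguous in a unit name (including a literal "-") become C-style \xXX escapes. A minimal re-implementation of that documented rule, for illustration:

    def systemd_escape_path(path: str) -> str:
        """Escape a filesystem path the way systemd names its mount units."""
        def esc(component: str) -> str:
            out = []
            for i, ch in enumerate(component):
                # systemd keeps ASCII alphanumerics, ':' and '_', and '.'
                # except as the first character of a component
                if (ch.isascii() and ch.isalnum()) or ch in ":_" or (ch == "." and i > 0):
                    out.append(ch)
                else:
                    out.append("\\x%02x" % ord(ch))
            return "".join(out)
        return "-".join(esc(p) for p in path.strip("/").split("/"))

    print(systemd_escape_path("/run/credentials/systemd-resolved.service") + ".mount")
    # run-credentials-systemd\x2dresolved.service.mount
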
May 16 00:13:07.055565 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:13:07.055620 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 00:13:07.069874 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:13:07.069995 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 00:13:07.071420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:13:07.071479 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:13:07.074646 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 00:13:07.075785 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:13:07.075857 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:13:07.078527 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 16 00:13:07.078582 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:13:07.081066 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:13:07.081123 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:13:07.083683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:13:07.083754 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:13:07.099204 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:13:07.099376 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 00:13:07.515700 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:13:07.515889 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 00:13:07.519052 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 00:13:07.521165 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:13:07.521254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 00:13:07.525118 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 00:13:07.544702 systemd[1]: Switching root. May 16 00:13:07.579380 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). May 16 00:13:07.579444 systemd-journald[195]: Journal stopped May 16 00:13:09.612558 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:13:09.612630 kernel: SELinux: policy capability open_perms=1 May 16 00:13:09.612642 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:13:09.612658 kernel: SELinux: policy capability always_check_network=0 May 16 00:13:09.612669 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:13:09.612681 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:13:09.612705 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:13:09.612716 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:13:09.612728 kernel: audit: type=1403 audit(1747354388.557:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:13:09.612741 systemd[1]: Successfully loaded SELinux policy in 81.669ms. May 16 00:13:09.612763 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.164ms. 
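
The audit record above carries its own Unix-epoch timestamp, audit(1747354388.557:2), which can be cross-checked against the journal clock:

    from datetime import datetime, timezone

    # 1747354388.557 is seconds since the epoch; the ":2" suffix is the
    # audit record serial number, not part of the time.
    print(datetime.fromtimestamp(1747354388.557, tz=timezone.utc).isoformat())
    # 2025-05-16T00:13:08.557000+00:00, consistent with the surrounding entries
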
May 16 00:13:09.612781 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 00:13:09.612794 systemd[1]: Detected virtualization kvm. May 16 00:13:09.612806 systemd[1]: Detected architecture x86-64. May 16 00:13:09.612818 systemd[1]: Detected first boot. May 16 00:13:09.612831 systemd[1]: Initializing machine ID from VM UUID. May 16 00:13:09.612843 zram_generator::config[1063]: No configuration found. May 16 00:13:09.612856 kernel: Guest personality initialized and is inactive May 16 00:13:09.612868 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 16 00:13:09.612892 kernel: Initialized host personality May 16 00:13:09.612906 kernel: NET: Registered PF_VSOCK protocol family May 16 00:13:09.612918 systemd[1]: Populated /etc with preset unit settings. May 16 00:13:09.612931 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 00:13:09.612944 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:13:09.612956 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 00:13:09.612969 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:13:09.612982 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 00:13:09.612995 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 00:13:09.613008 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 00:13:09.613024 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 00:13:09.613037 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 00:13:09.613050 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 00:13:09.613063 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 00:13:09.613075 systemd[1]: Created slice user.slice - User and Session Slice. May 16 00:13:09.613087 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:13:09.613100 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 00:13:09.613113 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 00:13:09.613134 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 00:13:09.613147 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 00:13:09.613160 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:13:09.613172 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 16 00:13:09.613185 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:13:09.613198 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 00:13:09.613210 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 00:13:09.613223 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
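
The long banner after "systemd 256.8 running in system mode" is the compile-time feature list: a "+" prefix marks a feature built in, "-" one built out. Splitting it mechanically:

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
              "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
              "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
              "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
              "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")
    enabled  = [f[1:] for f in banner.split() if f.startswith("+")]
    disabled = [f[1:] for f in banner.split() if f.startswith("-")]
    print(len(enabled), "built in,", len(disabled), "built out")  # 26 built in, 11 built out
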
May 16 00:13:09.613238 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 00:13:09.613250 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:13:09.613263 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:13:09.613275 systemd[1]: Reached target slices.target - Slice Units. May 16 00:13:09.613287 systemd[1]: Reached target swap.target - Swaps. May 16 00:13:09.613299 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 00:13:09.613312 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 00:13:09.613324 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 00:13:09.613336 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:13:09.613351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:13:09.613377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 00:13:09.613389 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 00:13:09.613402 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 00:13:09.613414 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 00:13:09.613427 systemd[1]: Mounting media.mount - External Media Directory... May 16 00:13:09.613439 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:13:09.613452 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 00:13:09.613464 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 00:13:09.613479 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 00:13:09.613492 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:13:09.613505 systemd[1]: Reached target machines.target - Containers. May 16 00:13:09.613517 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 00:13:09.613530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:13:09.613542 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:13:09.613554 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 00:13:09.613567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:13:09.613583 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:13:09.613595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:13:09.613608 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 00:13:09.613620 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:13:09.613633 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:13:09.613646 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:13:09.613658 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
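
The modprobe@configfs.service, modprobe@dm_mod.service and similar units being started above are instances of a single template unit, modprobe@.service: the text between "@" and the type suffix is the instance string, available to the template body as %i (here, the kernel module to load). The name split, for illustration:

    def split_instance(unit: str):
        """Map an instantiated unit name to (template, instance)."""
        prefix, _, rest = unit.partition("@")
        instance, _, suffix = rest.rpartition(".")
        return f"{prefix}@.{suffix}", instance

    print(split_instance("modprobe@dm_mod.service"))  # ('modprobe@.service', 'dm_mod')
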
May 16 00:13:09.613670 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:13:09.613687 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:13:09.613703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 00:13:09.613715 kernel: loop: module loaded May 16 00:13:09.613727 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:13:09.613739 kernel: fuse: init (API version 7.39) May 16 00:13:09.613751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:13:09.613763 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 00:13:09.613776 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 00:13:09.613788 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 00:13:09.613803 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:13:09.613816 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:13:09.613828 systemd[1]: Stopped verity-setup.service. May 16 00:13:09.613841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:13:09.613853 kernel: ACPI: bus type drm_connector registered May 16 00:13:09.613867 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 00:13:09.613891 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 00:13:09.613904 systemd[1]: Mounted media.mount - External Media Directory. May 16 00:13:09.613916 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 00:13:09.613928 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 00:13:09.613941 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 00:13:09.613953 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:13:09.613965 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:13:09.613978 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 00:13:09.613994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:13:09.614006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:13:09.614019 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:13:09.614031 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:13:09.614044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:13:09.614077 systemd-journald[1127]: Collecting audit messages is disabled. May 16 00:13:09.614102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:13:09.614117 systemd-journald[1127]: Journal started May 16 00:13:09.614143 systemd-journald[1127]: Runtime Journal (/run/log/journal/b2571fbb63064803b11881727d352126) is 6M, max 48.3M, 42.3M free. May 16 00:13:09.614182 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:13:09.239173 systemd[1]: Queued start job for default target multi-user.target. May 16 00:13:09.255395 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
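
The three figures journald reports for the runtime journal above (6M used, 48.3M max, 42.3M free) are self-consistent, free = max - used:

    used, maximum = 6.0, 48.3             # M, from the journald line above
    print(f"{maximum - used:.1f}M free")  # 42.3M free
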
May 16 00:13:09.256459 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 00:13:09.618403 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 00:13:09.618438 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:13:09.632852 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:13:09.633464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:13:09.635622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:13:09.642699 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 00:13:09.644985 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 00:13:09.647308 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 00:13:09.664883 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 00:13:09.668420 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 00:13:09.673443 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 00:13:09.675024 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:13:09.675076 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:13:09.677619 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 16 00:13:09.678971 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 00:13:09.693538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 00:13:09.695153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:13:09.697313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 00:13:09.701177 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 00:13:09.704823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:13:09.706773 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 00:13:09.708245 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:13:09.711007 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:13:09.716482 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 00:13:09.718234 systemd-journald[1127]: Time spent on flushing to /var/log/journal/b2571fbb63064803b11881727d352126 is 12.845ms for 961 entries. May 16 00:13:09.718234 systemd-journald[1127]: System Journal (/var/log/journal/b2571fbb63064803b11881727d352126) is 8M, max 195.6M, 187.6M free. May 16 00:13:09.947648 systemd-journald[1127]: Received client request to flush runtime journal. 
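
The flush report above also gives enough to estimate per-entry cost: 12.845 ms spent writing 961 entries through to /var/log/journal is roughly 13.4 µs per journal entry:

    flush_ms, entries = 12.845, 961
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # 13.4 us/entry
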
May 16 00:13:09.947786 kernel: loop0: detected capacity change from 0 to 109808 May 16 00:13:09.948019 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:13:09.948099 kernel: loop1: detected capacity change from 0 to 224512 May 16 00:13:09.948124 kernel: loop2: detected capacity change from 0 to 151640 May 16 00:13:09.723227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:13:09.726708 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 00:13:09.734197 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:13:09.735615 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 00:13:09.736903 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 00:13:09.738387 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 00:13:09.744758 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 16 00:13:09.771329 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:13:09.773482 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 16 00:13:09.774653 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. May 16 00:13:09.774671 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. May 16 00:13:09.781814 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:13:09.788567 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 00:13:09.844959 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 00:13:09.847577 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 00:13:09.852533 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 16 00:13:09.854526 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 16 00:13:09.863516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:13:09.884019 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 16 00:13:09.884033 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. May 16 00:13:09.888806 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:13:09.949639 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 00:13:10.062392 kernel: loop3: detected capacity change from 0 to 109808 May 16 00:13:10.126436 kernel: loop4: detected capacity change from 0 to 224512 May 16 00:13:10.136458 kernel: loop5: detected capacity change from 0 to 151640 May 16 00:13:10.147553 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 16 00:13:10.148211 (sd-merge)[1209]: Merged extensions into '/usr'. May 16 00:13:10.198802 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... May 16 00:13:10.198821 systemd[1]: Reloading... May 16 00:13:10.281392 zram_generator::config[1238]: No configuration found. May 16 00:13:10.321731 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
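
The (sd-merge) lines are systemd-sysext assembling /usr from the base image plus the extension images it found: containerd-flatcar, docker-flatcar, and the kubernetes image the Ignition files stage linked into /etc/extensions earlier. Per the systemd-sysext documentation, images are picked up from a fixed set of directories; a sketch of that discovery step (directory list per the man page, the helper itself is illustrative):

    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        """Map extension name -> raw image path, earlier directories winning."""
        found = {}
        for d in map(Path, SEARCH_DIRS):
            if not d.is_dir():
                continue
            for image in sorted(d.glob("*.raw")):
                found.setdefault(image.stem, image.resolve())
        return found
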
May 16 00:13:10.416688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:13:10.483703 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 00:13:10.484177 systemd[1]: Reloading finished in 284 ms. May 16 00:13:10.505528 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 00:13:10.510281 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 00:13:10.512065 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 16 00:13:10.544301 systemd[1]: Starting ensure-sysext.service... May 16 00:13:10.566404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:13:10.578466 systemd[1]: Reload requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)... May 16 00:13:10.578484 systemd[1]: Reloading... May 16 00:13:10.646099 zram_generator::config[1309]: No configuration found. May 16 00:13:10.778435 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:13:10.778722 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 00:13:10.779707 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 00:13:10.780005 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. May 16 00:13:10.780086 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. May 16 00:13:10.784290 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:13:10.784307 systemd-tmpfiles[1278]: Skipping /boot May 16 00:13:10.798582 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. May 16 00:13:10.798598 systemd-tmpfiles[1278]: Skipping /boot May 16 00:13:10.858297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:13:10.929942 systemd[1]: Reloading finished in 351 ms. May 16 00:13:10.943610 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 00:13:10.966354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:13:10.981241 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:13:10.984559 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 00:13:10.995521 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 00:13:11.000288 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 00:13:11.004952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:13:11.017622 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 00:13:11.024483 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:13:11.024668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
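
systemd warns twice above (once per reload) that docker.socket points ListenStream= at the legacy /var/run tree and patches it to /run at load time. The lasting fix is the one-line edit the warning suggests; mechanically:

    # Rewrite the legacy path in the unit text, as the warning suggests.
    unit = "[Socket]\nListenStream=/var/run/docker.sock\n"
    print(unit.replace("ListenStream=/var/run/", "ListenStream=/run/"), end="")
    # [Socket]
    # ListenStream=/run/docker.sock
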
May 16 00:13:11.026091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:13:11.029848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:13:11.040076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:13:11.042034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:13:11.042181 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 00:13:11.046655 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 00:13:11.048291 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:13:11.050088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:13:11.050340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:13:11.052613 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:13:11.052853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:13:11.055157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:13:11.059869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:13:11.063538 systemd-udevd[1351]: Using default interface naming scheme 'v255'. May 16 00:13:11.063557 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 00:13:11.065981 augenrules[1375]: No rules May 16 00:13:11.070338 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:13:11.070696 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:13:11.079759 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 00:13:11.086094 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 00:13:11.090699 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:13:11.104300 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:13:11.106771 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:13:11.108075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 00:13:11.109689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 00:13:11.113925 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 00:13:11.122872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 00:13:11.128574 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 00:13:11.130701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 00:13:11.130751 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 16 00:13:11.137085 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 00:13:11.141906 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 00:13:11.143173 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:13:11.143209 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 16 00:13:11.143846 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 00:13:11.147118 systemd[1]: Finished ensure-sysext.service. May 16 00:13:11.148768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:13:11.149459 augenrules[1398]: /sbin/augenrules: No change May 16 00:13:11.154608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 00:13:11.156690 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:13:11.157422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 00:13:11.159825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:13:11.160062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 00:13:11.200384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1423) May 16 00:13:11.209935 augenrules[1437]: No rules May 16 00:13:11.212700 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:13:11.213195 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:13:11.215908 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:13:11.216210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 00:13:11.220302 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 00:13:11.233604 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 16 00:13:11.240447 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:13:11.240515 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 00:13:11.245484 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 00:13:11.318725 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 00:13:11.324552 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 00:13:11.356909 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 00:13:11.393933 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 16 00:13:11.406996 kernel: ACPI: button: Power Button [PWRF] May 16 00:13:11.426748 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 16 00:13:11.421513 systemd-resolved[1349]: Positive Trust Anchors: May 16 00:13:11.421535 systemd-resolved[1349]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:13:11.421577 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:13:11.439757 systemd-resolved[1349]: Defaulting to hostname 'linux'. May 16 00:13:11.443679 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:13:11.445249 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:13:11.491415 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 16 00:13:11.528212 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 16 00:13:11.528655 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 16 00:13:11.493268 systemd-networkd[1420]: lo: Link UP May 16 00:13:11.493282 systemd-networkd[1420]: lo: Gained carrier May 16 00:13:11.495229 systemd-networkd[1420]: Enumeration completed May 16 00:13:11.495652 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:13:11.495657 systemd-networkd[1420]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:13:11.496297 systemd-networkd[1420]: eth0: Link UP May 16 00:13:11.496301 systemd-networkd[1420]: eth0: Gained carrier May 16 00:13:11.496316 systemd-networkd[1420]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:13:11.496662 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:13:11.498181 systemd[1]: Reached target network.target - Network. May 16 00:13:11.528932 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 00:13:11.535709 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 00:13:11.538922 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 00:13:11.540459 systemd[1]: Reached target time-set.target - System Time Set. May 16 00:13:11.550701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:13:11.612670 systemd-networkd[1420]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:13:11.618373 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection. May 16 00:13:11.623061 systemd-timesyncd[1451]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 00:13:11.623202 systemd-timesyncd[1451]: Initial clock synchronization to Fri 2025-05-16 00:13:11.977676 UTC. May 16 00:13:11.642093 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
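
The positive trust anchor resolved reports above is the DNS root zone's DS record for the key-signing key generated in 2017 (KSK-2017). Its fields, per RFC 4034:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
    # key_tag 20326:  identifies the root KSK
    # alg 8:          RSA/SHA-256
    # digest_type 2:  the digest is SHA-256 over the DNSKEY record
    print(owner, key_tag, alg, digest_type, len(digest) // 2, "byte digest")
    # . 20326 8 2 32 byte digest
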
May 16 00:13:11.647388 kernel: mousedev: PS/2 mouse device common for all mice May 16 00:13:11.660982 kernel: kvm_amd: TSC scaling supported May 16 00:13:11.661041 kernel: kvm_amd: Nested Virtualization enabled May 16 00:13:11.661079 kernel: kvm_amd: Nested Paging enabled May 16 00:13:11.662501 kernel: kvm_amd: LBR virtualization supported May 16 00:13:11.662597 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 16 00:13:11.664391 kernel: kvm_amd: Virtual GIF supported May 16 00:13:11.690471 kernel: EDAC MC: Ver: 3.0.0 May 16 00:13:11.725966 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 16 00:13:11.729245 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 16 00:13:11.731135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:13:11.761003 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:13:11.798548 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 16 00:13:11.804217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:13:11.805540 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:13:11.806900 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 00:13:11.808411 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 00:13:11.810118 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 00:13:11.811503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 00:13:11.812975 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 00:13:11.814439 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:13:11.814486 systemd[1]: Reached target paths.target - Path Units. May 16 00:13:11.815775 systemd[1]: Reached target timers.target - Timer Units. May 16 00:13:11.818420 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 00:13:11.821891 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 00:13:11.826024 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 00:13:11.828045 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 00:13:11.829791 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 00:13:11.834462 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 00:13:11.836205 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 00:13:11.838932 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 16 00:13:11.840802 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 16 00:13:11.842222 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:13:11.843366 systemd[1]: Reached target basic.target - Basic System. May 16 00:13:11.844420 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 00:13:11.844462 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
May 16 00:13:11.845779 systemd[1]: Starting containerd.service - containerd container runtime... May 16 00:13:11.848286 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 00:13:11.850553 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:13:11.853564 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 00:13:11.856318 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 00:13:11.857499 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 00:13:11.861637 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 00:13:11.864452 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 00:13:11.868327 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 00:13:11.869281 jq[1483]: false May 16 00:13:11.871640 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 00:13:11.877395 systemd[1]: Starting systemd-logind.service - User Login Management... May 16 00:13:11.879658 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:13:11.883903 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:13:11.889610 systemd[1]: Starting update-engine.service - Update Engine... May 16 00:13:11.892210 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 00:13:11.895517 dbus-daemon[1482]: [system] SELinux support is enabled May 16 00:13:11.897398 extend-filesystems[1484]: Found loop3 May 16 00:13:11.897673 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 00:13:11.899383 extend-filesystems[1484]: Found loop4 May 16 00:13:11.900318 extend-filesystems[1484]: Found loop5 May 16 00:13:11.900318 extend-filesystems[1484]: Found sr0 May 16 00:13:11.900318 extend-filesystems[1484]: Found vda May 16 00:13:11.900318 extend-filesystems[1484]: Found vda1 May 16 00:13:11.900318 extend-filesystems[1484]: Found vda2 May 16 00:13:11.900318 extend-filesystems[1484]: Found vda3 May 16 00:13:11.900318 extend-filesystems[1484]: Found usr May 16 00:13:11.900318 extend-filesystems[1484]: Found vda4 May 16 00:13:11.900318 extend-filesystems[1484]: Found vda6 May 16 00:13:11.900318 extend-filesystems[1484]: Found vda7 May 16 00:13:11.906071 extend-filesystems[1484]: Found vda9 May 16 00:13:11.906071 extend-filesystems[1484]: Checking size of /dev/vda9 May 16 00:13:11.903273 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 16 00:13:11.907217 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:13:11.943825 extend-filesystems[1484]: Resized partition /dev/vda9 May 16 00:13:11.945902 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 00:13:11.946375 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:13:11.946634 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 00:13:11.949103 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 16 00:13:11.949355 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 00:13:11.963475 jq[1498]: true May 16 00:13:11.968567 extend-filesystems[1506]: resize2fs 1.47.2 (1-Jan-2025) May 16 00:13:12.002501 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1408) May 16 00:13:11.969614 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 00:13:12.003318 update_engine[1494]: I20250516 00:13:11.983435 1494 main.cc:92] Flatcar Update Engine starting May 16 00:13:12.003318 update_engine[1494]: I20250516 00:13:12.002205 1494 update_check_scheduler.cc:74] Next update check in 7m27s May 16 00:13:11.973896 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:13:11.973931 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 00:13:12.005574 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:13:12.005607 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 00:13:12.020209 systemd[1]: Started update-engine.service - Update Engine. May 16 00:13:12.025005 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 00:13:12.028388 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:13:12.030118 jq[1516]: true May 16 00:13:12.037726 systemd-logind[1490]: Watching system buttons on /dev/input/event1 (Power Button) May 16 00:13:12.037752 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 16 00:13:12.038047 systemd-logind[1490]: New seat seat0. May 16 00:13:12.039658 systemd[1]: Started systemd-logind.service - User Login Management. May 16 00:13:12.060585 tar[1507]: linux-amd64/LICENSE May 16 00:13:12.062422 tar[1507]: linux-amd64/helm May 16 00:13:12.232475 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:13:12.318650 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 00:13:12.333494 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 00:13:12.342352 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:13:12.354674 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:13:12.354974 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 00:13:12.358559 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 00:13:12.396440 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:13:12.553895 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 00:13:12.571440 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 00:13:12.573855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 16 00:13:12.575343 systemd[1]: Reached target getty.target - Login Prompts. 
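The EXT4-fs message above records an online resize of /dev/vda9 from 553472 to 1864699 blocks (4 KiB each, per the resize2fs output that follows). A quick check of what those block counts mean in bytes:

# Block counts from the kernel message above, times the 4 KiB block size.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after: 7.11 GiB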
May 16 00:13:12.833656 extend-filesystems[1506]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:13:12.833656 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:13:12.833656 extend-filesystems[1506]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:13:12.839263 extend-filesystems[1484]: Resized filesystem in /dev/vda9 May 16 00:13:12.835110 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:13:12.835483 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 00:13:12.936619 bash[1537]: Updated "/home/core/.ssh/authorized_keys" May 16 00:13:12.947831 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 00:13:12.966788 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 16 00:13:13.001018 containerd[1508]: time="2025-05-16T00:13:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 00:13:13.002150 containerd[1508]: time="2025-05-16T00:13:13.001997607Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 16 00:13:13.015864 containerd[1508]: time="2025-05-16T00:13:13.015790577Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.783µs" May 16 00:13:13.015864 containerd[1508]: time="2025-05-16T00:13:13.015847861Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 00:13:13.015864 containerd[1508]: time="2025-05-16T00:13:13.015876430Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 00:13:13.016123 containerd[1508]: time="2025-05-16T00:13:13.016100998Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 00:13:13.016184 containerd[1508]: time="2025-05-16T00:13:13.016123436Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 00:13:13.016184 containerd[1508]: time="2025-05-16T00:13:13.016161435Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 00:13:13.016259 containerd[1508]: time="2025-05-16T00:13:13.016234028Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 00:13:13.016259 containerd[1508]: time="2025-05-16T00:13:13.016250992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 00:13:13.016684 containerd[1508]: time="2025-05-16T00:13:13.016643249Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 00:13:13.016684 containerd[1508]: time="2025-05-16T00:13:13.016672462Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 00:13:13.016746 containerd[1508]: time="2025-05-16T00:13:13.016687335Z" level=info msg="skip loading plugin" error="devmapper not configured: skip 
plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 00:13:13.016746 containerd[1508]: time="2025-05-16T00:13:13.016700012Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 00:13:13.016864 containerd[1508]: time="2025-05-16T00:13:13.016830983Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 00:13:13.017194 containerd[1508]: time="2025-05-16T00:13:13.017165298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 00:13:13.017235 containerd[1508]: time="2025-05-16T00:13:13.017212289Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 00:13:13.017235 containerd[1508]: time="2025-05-16T00:13:13.017223571Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 00:13:13.017312 containerd[1508]: time="2025-05-16T00:13:13.017268168Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 00:13:13.018930 containerd[1508]: time="2025-05-16T00:13:13.018775815Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 00:13:13.019046 containerd[1508]: time="2025-05-16T00:13:13.019019980Z" level=info msg="metadata content store policy set" policy=shared May 16 00:13:13.079553 systemd-networkd[1420]: eth0: Gained IPv6LL May 16 00:13:13.084385 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:13:13.086744 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:13:13.090100 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 00:13:13.093441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:13:13.099555 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108005586Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108104293Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108124203Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108138034Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108152563Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108163856Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108181206Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108194246Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108205611Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108230320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108241892Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 00:13:13.108379 containerd[1508]: time="2025-05-16T00:13:13.108255277Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108456749Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108477367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108629070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108645763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108668212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108680420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108692338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108703016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 
16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108715995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108727141Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 00:13:13.108775 containerd[1508]: time="2025-05-16T00:13:13.108737153Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 00:13:13.108980 containerd[1508]: time="2025-05-16T00:13:13.108816470Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 00:13:13.108980 containerd[1508]: time="2025-05-16T00:13:13.108829834Z" level=info msg="Start snapshots syncer" May 16 00:13:13.108980 containerd[1508]: time="2025-05-16T00:13:13.108855925Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 00:13:13.111060 containerd[1508]: time="2025-05-16T00:13:13.109086789Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 00:13:13.111060 containerd[1508]: time="2025-05-16T00:13:13.109132322Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109204584Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109350541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109370701Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109382868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109407700Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109419669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109431586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109443430Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109476943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109489109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109499954Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109534934Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109548724Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 00:13:13.111301 containerd[1508]: time="2025-05-16T00:13:13.109557852Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109568468Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109580041Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109590240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109602324Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109623066Z" level=info msg="runtime interface created" May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109630550Z" level=info msg="created NRI interface" May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109640811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109652114Z" level=info msg="Connect containerd service" May 16 00:13:13.111599 containerd[1508]: time="2025-05-16T00:13:13.109674532Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 
16 00:13:13.114645 containerd[1508]: time="2025-05-16T00:13:13.114614804Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:13:13.135378 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 00:13:13.141574 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:47858.service - OpenSSH per-connection server daemon (10.0.0.1:47858). May 16 00:13:13.170977 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 00:13:13.255813 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:13:13.256304 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 00:13:13.259061 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 00:13:13.329246 tar[1507]: linux-amd64/README.md May 16 00:13:13.354250 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 00:13:13.401589 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 47858 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:13.403114 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:13.412139 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 00:13:13.482571 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 00:13:13.495451 systemd-logind[1490]: New session 1 of user core. May 16 00:13:13.611364 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 00:13:13.613096 containerd[1508]: time="2025-05-16T00:13:13.612788106Z" level=info msg="Start subscribing containerd event" May 16 00:13:13.613096 containerd[1508]: time="2025-05-16T00:13:13.612866309Z" level=info msg="Start recovering state" May 16 00:13:13.613096 containerd[1508]: time="2025-05-16T00:13:13.613006855Z" level=info msg="Start event monitor" May 16 00:13:13.613096 containerd[1508]: time="2025-05-16T00:13:13.613035257Z" level=info msg="Start cni network conf syncer for default" May 16 00:13:13.613096 containerd[1508]: time="2025-05-16T00:13:13.613052098Z" level=info msg="Start streaming server" May 16 00:13:13.613096 containerd[1508]: time="2025-05-16T00:13:13.613076514Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 00:13:13.613096 containerd[1508]: time="2025-05-16T00:13:13.613089222Z" level=info msg="runtime interface starting up..." May 16 00:13:13.613096 containerd[1508]: time="2025-05-16T00:13:13.613098422Z" level=info msg="starting plugins..." May 16 00:13:13.613291 containerd[1508]: time="2025-05-16T00:13:13.613121121Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 00:13:13.613330 containerd[1508]: time="2025-05-16T00:13:13.613302194Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:13:13.613440 containerd[1508]: time="2025-05-16T00:13:13.613410111Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:13:13.613963 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:13:13.615598 containerd[1508]: time="2025-05-16T00:13:13.614443730Z" level=info msg="containerd successfully booted in 0.620821s" May 16 00:13:13.621788 systemd[1]: Starting user@500.service - User Manager for UID 500... 
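The containerd error above, "no network config found in /etc/cni/net.d", is expected this early in boot: on a kubeadm-style node the CNI configuration is installed later by the cluster's network add-on. For illustration only, a minimal bridge conflist that would satisfy the loader could be generated like this (the network name and subnet are invented, not this cluster's values):

import json
import pathlib

# Hypothetical example config; real clusters get this file from their CNI add-on.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",                            # invented name
    "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.85.0.0/16"}]], # invented subnet
            "routes": [{"dst": "0.0.0.0/0"}],
        },
    }],
}

pathlib.Path("/etc/cni/net.d/10-example.conflist").write_text(
    json.dumps(conflist, indent=2))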
May 16 00:13:13.640259 (systemd)[1609]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:13:13.643520 systemd-logind[1490]: New session c1 of user core. May 16 00:13:13.805462 systemd[1609]: Queued start job for default target default.target. May 16 00:13:13.814990 systemd[1609]: Created slice app.slice - User Application Slice. May 16 00:13:13.815023 systemd[1609]: Reached target paths.target - Paths. May 16 00:13:13.815071 systemd[1609]: Reached target timers.target - Timers. May 16 00:13:13.817024 systemd[1609]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 00:13:13.831810 systemd[1609]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 00:13:13.831974 systemd[1609]: Reached target sockets.target - Sockets. May 16 00:13:13.832033 systemd[1609]: Reached target basic.target - Basic System. May 16 00:13:13.832082 systemd[1609]: Reached target default.target - Main User Target. May 16 00:13:13.832122 systemd[1609]: Startup finished in 180ms. May 16 00:13:13.832711 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 00:13:13.843560 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 00:13:13.917006 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:47870.service - OpenSSH per-connection server daemon (10.0.0.1:47870). May 16 00:13:13.973517 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 47870 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:13.975939 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:13.981932 systemd-logind[1490]: New session 2 of user core. May 16 00:13:13.996769 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 00:13:14.056206 sshd[1622]: Connection closed by 10.0.0.1 port 47870 May 16 00:13:14.056610 sshd-session[1620]: pam_unix(sshd:session): session closed for user core May 16 00:13:14.066662 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:47870.service: Deactivated successfully. May 16 00:13:14.068651 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:13:14.070130 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit. May 16 00:13:14.071768 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:47872.service - OpenSSH per-connection server daemon (10.0.0.1:47872). May 16 00:13:14.074497 systemd-logind[1490]: Removed session 2. May 16 00:13:14.130733 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 47872 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:14.132807 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:14.137793 systemd-logind[1490]: New session 3 of user core. May 16 00:13:14.147536 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 00:13:14.207140 sshd[1630]: Connection closed by 10.0.0.1 port 47872 May 16 00:13:14.207628 sshd-session[1627]: pam_unix(sshd:session): session closed for user core May 16 00:13:14.213441 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:47872.service: Deactivated successfully. May 16 00:13:14.216504 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:13:14.217878 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit. May 16 00:13:14.219004 systemd-logind[1490]: Removed session 3. May 16 00:13:14.269232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 00:13:14.271697 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 00:13:14.273580 systemd[1]: Startup finished in 813ms (kernel) + 9.824s (initrd) + 5.787s (userspace) = 16.425s. May 16 00:13:14.309040 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:13:14.767761 kubelet[1640]: E0516 00:13:14.767693 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:13:14.771707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:13:14.771977 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:13:14.772432 systemd[1]: kubelet.service: Consumed 1.306s CPU time, 267.2M memory peak. May 16 00:13:24.414181 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:54444.service - OpenSSH per-connection server daemon (10.0.0.1:54444). May 16 00:13:24.467485 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 54444 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:24.469232 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:24.474698 systemd-logind[1490]: New session 4 of user core. May 16 00:13:24.484499 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 00:13:24.538643 sshd[1655]: Connection closed by 10.0.0.1 port 54444 May 16 00:13:24.539103 sshd-session[1653]: pam_unix(sshd:session): session closed for user core May 16 00:13:24.551590 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:54444.service: Deactivated successfully. May 16 00:13:24.553504 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:13:24.555106 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit. May 16 00:13:24.556754 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:54448.service - OpenSSH per-connection server daemon (10.0.0.1:54448). May 16 00:13:24.557701 systemd-logind[1490]: Removed session 4. May 16 00:13:24.620725 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 54448 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:24.622496 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:24.627322 systemd-logind[1490]: New session 5 of user core. May 16 00:13:24.635567 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 00:13:24.685774 sshd[1663]: Connection closed by 10.0.0.1 port 54448 May 16 00:13:24.686020 sshd-session[1660]: pam_unix(sshd:session): session closed for user core May 16 00:13:24.705204 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:54448.service: Deactivated successfully. May 16 00:13:24.707354 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:13:24.709813 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit. May 16 00:13:24.711439 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:54450.service - OpenSSH per-connection server daemon (10.0.0.1:54450). May 16 00:13:24.712395 systemd-logind[1490]: Removed session 5. 
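The kubelet crash above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the expected state before the node joins a cluster: on a kubeadm-provisioned node that file is written by kubeadm init/join, so the unit fails and is rescheduled for restart until then. As a sketch only, with placeholder values rather than this host's real settings, a minimal KubeletConfiguration has this shape:

import pathlib
import textwrap

# Placeholder config; kubeadm normally generates the real file on join.
CONFIG = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # matches the driver the CRI runtime reports later in this log
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
""")

pathlib.Path("/var/lib/kubelet/config.yaml").write_text(CONFIG)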
May 16 00:13:24.763601 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 54450 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:24.765049 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:24.769853 systemd-logind[1490]: New session 6 of user core. May 16 00:13:24.785668 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 00:13:24.786914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:13:24.788810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:13:24.843693 sshd[1672]: Connection closed by 10.0.0.1 port 54450 May 16 00:13:24.844097 sshd-session[1668]: pam_unix(sshd:session): session closed for user core May 16 00:13:24.863601 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:54450.service: Deactivated successfully. May 16 00:13:24.866085 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:13:24.868040 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit. May 16 00:13:24.869498 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:54456.service - OpenSSH per-connection server daemon (10.0.0.1:54456). May 16 00:13:24.870765 systemd-logind[1490]: Removed session 6. May 16 00:13:24.926869 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 54456 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:24.928814 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:24.934323 systemd-logind[1490]: New session 7 of user core. May 16 00:13:24.941652 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 00:13:25.004446 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 00:13:25.004804 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:13:25.008989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:13:25.024195 sudo[1685]: pam_unix(sudo:session): session closed for user root May 16 00:13:25.025946 sshd[1682]: Connection closed by 10.0.0.1 port 54456 May 16 00:13:25.026493 sshd-session[1679]: pam_unix(sshd:session): session closed for user core May 16 00:13:25.027851 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:13:25.032696 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:54456.service: Deactivated successfully. May 16 00:13:25.034869 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:13:25.036847 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit. May 16 00:13:25.038688 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:54466.service - OpenSSH per-connection server daemon (10.0.0.1:54466). May 16 00:13:25.039703 systemd-logind[1490]: Removed session 7. 
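The sudo entry above runs setenforce 1, switching SELinux to enforcing mode. The kernel exposes the current mode at /sys/fs/selinux/enforce, so the effect can be read back directly:

from pathlib import Path

enforce = Path("/sys/fs/selinux/enforce")   # 1 = enforcing, 0 = permissive
if enforce.exists():
    mode = enforce.read_text().strip()
    print("SELinux:", "enforcing" if mode == "1" else "permissive")
else:
    print("SELinux not enabled on this kernel")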
May 16 00:13:25.083883 kubelet[1690]: E0516 00:13:25.083827 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:13:25.089505 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 54466 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:25.090354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:13:25.090589 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:13:25.090991 systemd[1]: kubelet.service: Consumed 262ms CPU time, 111.2M memory peak. May 16 00:13:25.091171 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:25.096760 systemd-logind[1490]: New session 8 of user core. May 16 00:13:25.106769 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 00:13:25.162230 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 00:13:25.162569 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:13:25.166211 sudo[1707]: pam_unix(sudo:session): session closed for user root May 16 00:13:25.172462 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 00:13:25.172855 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:13:25.182604 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:13:25.230245 augenrules[1729]: No rules May 16 00:13:25.232006 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:13:25.232303 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:13:25.233681 sudo[1706]: pam_unix(sudo:session): session closed for user root May 16 00:13:25.235160 sshd[1705]: Connection closed by 10.0.0.1 port 54466 May 16 00:13:25.235548 sshd-session[1699]: pam_unix(sshd:session): session closed for user core May 16 00:13:25.248594 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:54466.service: Deactivated successfully. May 16 00:13:25.250527 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:13:25.252064 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit. May 16 00:13:25.253436 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:54470.service - OpenSSH per-connection server daemon (10.0.0.1:54470). May 16 00:13:25.254243 systemd-logind[1490]: Removed session 8. May 16 00:13:25.303116 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 54470 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:13:25.304841 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:13:25.309288 systemd-logind[1490]: New session 9 of user core. May 16 00:13:25.322565 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 00:13:25.376435 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:13:25.376771 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:13:26.091107 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 16 00:13:26.102749 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 00:13:26.842488 dockerd[1761]: time="2025-05-16T00:13:26.842407566Z" level=info msg="Starting up" May 16 00:13:26.843653 dockerd[1761]: time="2025-05-16T00:13:26.843609895Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 16 00:13:28.384773 dockerd[1761]: time="2025-05-16T00:13:28.384700399Z" level=info msg="Loading containers: start." May 16 00:13:28.620408 kernel: Initializing XFRM netlink socket May 16 00:13:28.708787 systemd-networkd[1420]: docker0: Link UP May 16 00:13:28.802990 dockerd[1761]: time="2025-05-16T00:13:28.802930440Z" level=info msg="Loading containers: done." May 16 00:13:28.832995 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck786971078-merged.mount: Deactivated successfully. May 16 00:13:28.835095 dockerd[1761]: time="2025-05-16T00:13:28.835038126Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:13:28.835230 dockerd[1761]: time="2025-05-16T00:13:28.835134357Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 16 00:13:28.835261 dockerd[1761]: time="2025-05-16T00:13:28.835248726Z" level=info msg="Daemon has completed initialization" May 16 00:13:28.879507 dockerd[1761]: time="2025-05-16T00:13:28.879424359Z" level=info msg="API listen on /run/docker.sock" May 16 00:13:28.879761 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 00:13:29.965346 containerd[1508]: time="2025-05-16T00:13:29.965298775Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 16 00:13:30.718484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292120064.mount: Deactivated successfully. 
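Once dockerd logs "API listen on /run/docker.sock" above, the daemon accepts requests on that socket. A minimal liveness probe, assuming the Docker SDK for Python (docker-py) is installed:

import docker

client = docker.from_env()   # honours DOCKER_HOST, defaults to the unix socket
print("daemon reachable:", client.ping())             # True once the daemon is up
print("server version:", client.version()["Version"])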
May 16 00:13:32.882631 containerd[1508]: time="2025-05-16T00:13:32.882557212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:32.884310 containerd[1508]: time="2025-05-16T00:13:32.884204253Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797811" May 16 00:13:32.886586 containerd[1508]: time="2025-05-16T00:13:32.886513037Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:32.891192 containerd[1508]: time="2025-05-16T00:13:32.891148685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:32.892557 containerd[1508]: time="2025-05-16T00:13:32.892482512Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.927126509s" May 16 00:13:32.892557 containerd[1508]: time="2025-05-16T00:13:32.892542688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\"" May 16 00:13:32.893612 containerd[1508]: time="2025-05-16T00:13:32.893570617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 16 00:13:34.618665 containerd[1508]: time="2025-05-16T00:13:34.618603023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:34.628383 containerd[1508]: time="2025-05-16T00:13:34.628285389Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782523" May 16 00:13:34.640723 containerd[1508]: time="2025-05-16T00:13:34.640665982Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:34.660077 containerd[1508]: time="2025-05-16T00:13:34.660001550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:34.661179 containerd[1508]: time="2025-05-16T00:13:34.661124986Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.767506148s" May 16 00:13:34.661179 containerd[1508]: time="2025-05-16T00:13:34.661175962Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\"" May 16 00:13:34.661880 
containerd[1508]: time="2025-05-16T00:13:34.661854553Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 16 00:13:35.203579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 00:13:35.205687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:13:35.400689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:13:35.405005 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:13:36.105092 kubelet[2037]: E0516 00:13:36.105015 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:13:36.110069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:13:36.110279 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:13:36.110759 systemd[1]: kubelet.service: Consumed 253ms CPU time, 112.9M memory peak. May 16 00:13:37.212383 containerd[1508]: time="2025-05-16T00:13:37.212291947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:37.213380 containerd[1508]: time="2025-05-16T00:13:37.213292333Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176063" May 16 00:13:37.214849 containerd[1508]: time="2025-05-16T00:13:37.214812872Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:37.220036 containerd[1508]: time="2025-05-16T00:13:37.219999409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:37.220932 containerd[1508]: time="2025-05-16T00:13:37.220893650Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.559007764s" May 16 00:13:37.220980 containerd[1508]: time="2025-05-16T00:13:37.220936829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\"" May 16 00:13:37.221400 containerd[1508]: time="2025-05-16T00:13:37.221376393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 00:13:39.529676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284336603.mount: Deactivated successfully. 
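Each "Pulled image" entry above reports both the image size in bytes and the wall-clock pull duration, so effective throughput falls out directly; using kube-scheduler's figures from the preceding message:

size_bytes = 20_777_921        # size "20777921" in the Pulled message
duration_s = 2.559007764       # "in 2.559007764s"

print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")   # ~7.7 MiB/s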
May 16 00:13:40.052386 containerd[1508]: time="2025-05-16T00:13:40.052308510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:40.060976 containerd[1508]: time="2025-05-16T00:13:40.060881569Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892872" May 16 00:13:40.068468 containerd[1508]: time="2025-05-16T00:13:40.068401043Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:40.078249 containerd[1508]: time="2025-05-16T00:13:40.078195066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:40.078871 containerd[1508]: time="2025-05-16T00:13:40.078776910Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.857368111s" May 16 00:13:40.078871 containerd[1508]: time="2025-05-16T00:13:40.078858930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 16 00:13:40.079418 containerd[1508]: time="2025-05-16T00:13:40.079385010Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:13:41.183598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430965734.mount: Deactivated successfully. 
May 16 00:13:44.786787 containerd[1508]: time="2025-05-16T00:13:44.786716844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:44.800046 containerd[1508]: time="2025-05-16T00:13:44.799970133Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 16 00:13:44.824385 containerd[1508]: time="2025-05-16T00:13:44.824330573Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:44.864547 containerd[1508]: time="2025-05-16T00:13:44.864492575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:44.865725 containerd[1508]: time="2025-05-16T00:13:44.865694180Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.786276035s" May 16 00:13:44.865795 containerd[1508]: time="2025-05-16T00:13:44.865732342Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 16 00:13:44.866718 containerd[1508]: time="2025-05-16T00:13:44.866681706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:13:46.203422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 16 00:13:46.205226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:13:46.415142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:13:46.428729 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:13:46.807855 kubelet[2117]: E0516 00:13:46.807783 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:13:46.812185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:13:46.812464 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:13:46.812871 systemd[1]: kubelet.service: Consumed 238ms CPU time, 113M memory peak. May 16 00:13:47.765707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757517346.mount: Deactivated successfully. 
May 16 00:13:47.926928 containerd[1508]: time="2025-05-16T00:13:47.926836628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:13:47.940885 containerd[1508]: time="2025-05-16T00:13:47.940810276Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 16 00:13:47.945830 containerd[1508]: time="2025-05-16T00:13:47.945801666Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:13:47.960506 containerd[1508]: time="2025-05-16T00:13:47.960456662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:13:47.961166 containerd[1508]: time="2025-05-16T00:13:47.961123028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.094396083s" May 16 00:13:47.961166 containerd[1508]: time="2025-05-16T00:13:47.961153060Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 16 00:13:47.961768 containerd[1508]: time="2025-05-16T00:13:47.961602402Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 16 00:13:48.540744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444043534.mount: Deactivated successfully. 
May 16 00:13:52.985633 containerd[1508]: time="2025-05-16T00:13:52.985543342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:53.080503 containerd[1508]: time="2025-05-16T00:13:53.080336859Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 16 00:13:53.108899 containerd[1508]: time="2025-05-16T00:13:53.108820602Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:53.134665 containerd[1508]: time="2025-05-16T00:13:53.134582000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:13:53.136132 containerd[1508]: time="2025-05-16T00:13:53.136065768Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.174430941s" May 16 00:13:53.136132 containerd[1508]: time="2025-05-16T00:13:53.136121307Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 16 00:13:55.356773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:13:55.356941 systemd[1]: kubelet.service: Consumed 238ms CPU time, 113M memory peak. May 16 00:13:55.359233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:13:55.385512 systemd[1]: Reload requested from client PID 2214 ('systemctl') (unit session-9.scope)... May 16 00:13:55.385688 systemd[1]: Reloading... May 16 00:13:55.498398 zram_generator::config[2261]: No configuration found. May 16 00:13:56.088800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:13:56.199902 systemd[1]: Reloading finished in 813 ms. May 16 00:13:56.266888 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:13:56.269497 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:13:56.269775 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:13:56.269819 systemd[1]: kubelet.service: Consumed 169ms CPU time, 98.3M memory peak. May 16 00:13:56.271526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:13:56.480825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:13:56.495696 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:13:56.532798 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:13:56.533262 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 16 00:13:56.533262 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:13:56.533372 kubelet[2309]: I0516 00:13:56.533329 2309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:13:56.840286 kubelet[2309]: I0516 00:13:56.840142 2309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:13:56.840286 kubelet[2309]: I0516 00:13:56.840179 2309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:13:56.840557 kubelet[2309]: I0516 00:13:56.840521 2309 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:13:56.876972 update_engine[1494]: I20250516 00:13:56.876857 1494 update_attempter.cc:509] Updating boot flags... May 16 00:13:56.882756 kubelet[2309]: E0516 00:13:56.881568 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:13:58.887708 kubelet[2309]: I0516 00:13:58.887639 2309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:13:58.909401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2326) May 16 00:13:58.929803 kubelet[2309]: I0516 00:13:58.929708 2309 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 00:13:58.939450 kubelet[2309]: I0516 00:13:58.936638 2309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:13:58.939450 kubelet[2309]: I0516 00:13:58.936918 2309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:13:58.939450 kubelet[2309]: I0516 00:13:58.936947 2309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:13:58.939450 kubelet[2309]: I0516 00:13:58.937884 2309 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:13:58.939706 kubelet[2309]: I0516 00:13:58.937902 2309 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:13:58.939706 kubelet[2309]: I0516 00:13:58.938064 2309 state_mem.go:36] "Initialized new in-memory state store" May 16 00:13:58.946390 kubelet[2309]: I0516 00:13:58.944146 2309 kubelet.go:446] "Attempting to sync node with API server" May 16 00:13:58.946390 kubelet[2309]: I0516 00:13:58.944189 2309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:13:58.946390 kubelet[2309]: I0516 00:13:58.944227 2309 kubelet.go:352] "Adding apiserver pod source" May 16 00:13:58.946390 kubelet[2309]: I0516 00:13:58.944243 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:13:58.950342 kubelet[2309]: I0516 00:13:58.950310 2309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 16 00:13:58.950732 kubelet[2309]: I0516 00:13:58.950705 2309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:13:58.950779 kubelet[2309]: W0516 00:13:58.950771 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 00:13:58.954199 kubelet[2309]: W0516 00:13:58.954144 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:13:58.954275 kubelet[2309]: E0516 00:13:58.954203 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:13:58.954815 kubelet[2309]: W0516 00:13:58.954772 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:13:58.954870 kubelet[2309]: E0516 00:13:58.954828 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:13:58.955674 kubelet[2309]: I0516 00:13:58.955100 2309 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:13:58.955850 kubelet[2309]: I0516 00:13:58.955802 2309 server.go:1287] "Started kubelet" May 16 00:13:58.955891 kubelet[2309]: I0516 00:13:58.955877 2309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:13:58.956769 kubelet[2309]: I0516 00:13:58.956746 2309 server.go:479] "Adding debug handlers to kubelet server" May 16 00:13:58.957902 kubelet[2309]: I0516 00:13:58.957882 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:13:58.958291 kubelet[2309]: I0516 00:13:58.958232 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:13:58.960505 kubelet[2309]: I0516 00:13:58.958858 2309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:13:58.960505 kubelet[2309]: I0516 00:13:58.959106 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:13:58.965322 kubelet[2309]: I0516 00:13:58.965290 2309 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:13:58.965747 kubelet[2309]: E0516 00:13:58.965722 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:13:58.966007 kubelet[2309]: I0516 00:13:58.965986 2309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:13:58.966121 kubelet[2309]: I0516 00:13:58.966101 2309 reconciler.go:26] "Reconciler: start to sync state" May 16 00:13:58.966385 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2325) May 16 00:13:58.968208 kubelet[2309]: E0516 00:13:58.968173 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: 
connection refused" interval="200ms" May 16 00:13:58.974008 kubelet[2309]: I0516 00:13:58.973921 2309 factory.go:221] Registration of the systemd container factory successfully May 16 00:13:58.976584 kubelet[2309]: I0516 00:13:58.976478 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:13:58.977288 kubelet[2309]: E0516 00:13:58.977236 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:13:58.977350 kubelet[2309]: W0516 00:13:58.977312 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:13:58.977450 kubelet[2309]: E0516 00:13:58.977384 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:13:58.978773 kubelet[2309]: E0516 00:13:58.978721 2309 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:13:58.981740 kubelet[2309]: I0516 00:13:58.979220 2309 factory.go:221] Registration of the containerd container factory successfully May 16 00:13:58.985125 kubelet[2309]: E0516 00:13:58.982430 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd99ab69f381b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:13:58.955120667 +0000 UTC m=+2.455367160,LastTimestamp:2025-05-16 00:13:58.955120667 +0000 UTC m=+2.455367160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:13:59.008131 kubelet[2309]: I0516 00:13:59.008107 2309 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:13:59.008283 kubelet[2309]: I0516 00:13:59.008271 2309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:13:59.008341 kubelet[2309]: I0516 00:13:59.008332 2309 state_mem.go:36] "Initialized new in-memory state store" May 16 00:13:59.019333 kubelet[2309]: I0516 00:13:59.017977 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:13:59.019879 kubelet[2309]: I0516 00:13:59.019447 2309 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 00:13:59.019879 kubelet[2309]: I0516 00:13:59.019476 2309 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:13:59.019879 kubelet[2309]: I0516 00:13:59.019500 2309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 00:13:59.019879 kubelet[2309]: I0516 00:13:59.019511 2309 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:13:59.019879 kubelet[2309]: E0516 00:13:59.019565 2309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:13:59.020417 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2325) May 16 00:13:59.020458 kubelet[2309]: W0516 00:13:59.020100 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:13:59.020458 kubelet[2309]: E0516 00:13:59.020150 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:13:59.037566 kubelet[2309]: I0516 00:13:59.037537 2309 policy_none.go:49] "None policy: Start" May 16 00:13:59.037817 kubelet[2309]: I0516 00:13:59.037686 2309 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:13:59.037817 kubelet[2309]: I0516 00:13:59.037703 2309 state_mem.go:35] "Initializing new in-memory state store" May 16 00:13:59.066253 kubelet[2309]: E0516 00:13:59.066228 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:13:59.067090 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 00:13:59.080409 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 00:13:59.093211 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:13:59.094452 kubelet[2309]: I0516 00:13:59.094275 2309 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:13:59.094598 kubelet[2309]: I0516 00:13:59.094576 2309 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:13:59.094598 kubelet[2309]: I0516 00:13:59.094592 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:13:59.094865 kubelet[2309]: I0516 00:13:59.094844 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:13:59.095606 kubelet[2309]: E0516 00:13:59.095578 2309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 00:13:59.095644 kubelet[2309]: E0516 00:13:59.095618 2309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:13:59.129317 systemd[1]: Created slice kubepods-burstable-pod1ea3a67026ad9aeb6dc562d18847c2f2.slice - libcontainer container kubepods-burstable-pod1ea3a67026ad9aeb6dc562d18847c2f2.slice. May 16 00:13:59.148867 kubelet[2309]: E0516 00:13:59.148211 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:13:59.151532 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 16 00:13:59.162000 kubelet[2309]: E0516 00:13:59.161966 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:13:59.165418 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 16 00:13:59.166394 kubelet[2309]: I0516 00:13:59.166336 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ea3a67026ad9aeb6dc562d18847c2f2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ea3a67026ad9aeb6dc562d18847c2f2\") " pod="kube-system/kube-apiserver-localhost" May 16 00:13:59.166394 kubelet[2309]: I0516 00:13:59.166387 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:13:59.166563 kubelet[2309]: I0516 00:13:59.166408 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:13:59.166563 kubelet[2309]: I0516 00:13:59.166426 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:13:59.166563 kubelet[2309]: I0516 00:13:59.166443 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 00:13:59.166563 kubelet[2309]: I0516 00:13:59.166459 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ea3a67026ad9aeb6dc562d18847c2f2-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"1ea3a67026ad9aeb6dc562d18847c2f2\") " pod="kube-system/kube-apiserver-localhost" May 16 00:13:59.166563 kubelet[2309]: I0516 00:13:59.166477 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:13:59.166730 kubelet[2309]: I0516 00:13:59.166492 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:13:59.166730 kubelet[2309]: I0516 00:13:59.166506 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ea3a67026ad9aeb6dc562d18847c2f2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ea3a67026ad9aeb6dc562d18847c2f2\") " pod="kube-system/kube-apiserver-localhost" May 16 00:13:59.167456 kubelet[2309]: E0516 00:13:59.167433 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:13:59.168760 kubelet[2309]: E0516 00:13:59.168734 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" May 16 00:13:59.196164 kubelet[2309]: I0516 00:13:59.196114 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:13:59.196601 kubelet[2309]: E0516 00:13:59.196555 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 16 00:13:59.398613 kubelet[2309]: I0516 00:13:59.398572 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:13:59.399174 kubelet[2309]: E0516 00:13:59.399049 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 16 00:13:59.449542 kubelet[2309]: E0516 00:13:59.449497 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:13:59.450379 containerd[1508]: time="2025-05-16T00:13:59.450310716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ea3a67026ad9aeb6dc562d18847c2f2,Namespace:kube-system,Attempt:0,}" May 16 00:13:59.462785 kubelet[2309]: E0516 00:13:59.462751 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:13:59.463335 containerd[1508]: time="2025-05-16T00:13:59.463284604Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 00:13:59.468855 kubelet[2309]: E0516 00:13:59.468820 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:13:59.469442 containerd[1508]: time="2025-05-16T00:13:59.469397103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 00:13:59.569415 kubelet[2309]: E0516 00:13:59.569332 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" May 16 00:13:59.800921 kubelet[2309]: I0516 00:13:59.800871 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:13:59.801304 kubelet[2309]: E0516 00:13:59.801254 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 16 00:14:00.236032 kubelet[2309]: W0516 00:14:00.235941 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:14:00.236032 kubelet[2309]: E0516 00:14:00.236020 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:14:00.238544 kubelet[2309]: W0516 00:14:00.238507 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:14:00.238597 kubelet[2309]: E0516 00:14:00.238545 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:14:00.370443 kubelet[2309]: E0516 00:14:00.370377 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" May 16 00:14:00.380930 kubelet[2309]: W0516 00:14:00.380892 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:14:00.381031 kubelet[2309]: E0516 00:14:00.380949 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:14:00.497559 containerd[1508]: time="2025-05-16T00:14:00.497409599Z" level=info msg="connecting to shim 88be55a2d882a61cec6220ca0135143c3535deeef5b785675b1c73a6818e27cb" address="unix:///run/containerd/s/4cf5f211ad640234d9b2f083efc80383da7ee5971ac1ddeb08ea456f9671b05c" namespace=k8s.io protocol=ttrpc version=3 May 16 00:14:00.513570 containerd[1508]: time="2025-05-16T00:14:00.510104160Z" level=info msg="connecting to shim bc67774da7692a881b6c36f54852d95743f87b4fdc80709e490f3d4554fa0afa" address="unix:///run/containerd/s/7dafb26f2f2e517c1728b7addf788650022693bba11ddb257f3aac1d84b76420" namespace=k8s.io protocol=ttrpc version=3 May 16 00:14:00.514473 kubelet[2309]: W0516 00:14:00.514236 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused May 16 00:14:00.514473 kubelet[2309]: E0516 00:14:00.514294 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" May 16 00:14:00.537575 systemd[1]: Started cri-containerd-88be55a2d882a61cec6220ca0135143c3535deeef5b785675b1c73a6818e27cb.scope - libcontainer container 88be55a2d882a61cec6220ca0135143c3535deeef5b785675b1c73a6818e27cb. May 16 00:14:00.541749 systemd[1]: Started cri-containerd-bc67774da7692a881b6c36f54852d95743f87b4fdc80709e490f3d4554fa0afa.scope - libcontainer container bc67774da7692a881b6c36f54852d95743f87b4fdc80709e490f3d4554fa0afa. May 16 00:14:00.589528 containerd[1508]: time="2025-05-16T00:14:00.589459051Z" level=info msg="connecting to shim 9b04c8a7e136bff62742fc6afd3820a5fbc0100943d7ddeb32cccdfbd4e72f0b" address="unix:///run/containerd/s/f59ea3a23296bb77680d9a9f7276964b43c021345758baf17f50260b8c4dc958" namespace=k8s.io protocol=ttrpc version=3 May 16 00:14:00.603003 kubelet[2309]: I0516 00:14:00.602949 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:14:00.603399 kubelet[2309]: E0516 00:14:00.603308 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" May 16 00:14:00.620507 systemd[1]: Started cri-containerd-9b04c8a7e136bff62742fc6afd3820a5fbc0100943d7ddeb32cccdfbd4e72f0b.scope - libcontainer container 9b04c8a7e136bff62742fc6afd3820a5fbc0100943d7ddeb32cccdfbd4e72f0b. 
May 16 00:14:00.665995 containerd[1508]: time="2025-05-16T00:14:00.665948301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1ea3a67026ad9aeb6dc562d18847c2f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc67774da7692a881b6c36f54852d95743f87b4fdc80709e490f3d4554fa0afa\"" May 16 00:14:00.667203 kubelet[2309]: E0516 00:14:00.667158 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:00.669264 containerd[1508]: time="2025-05-16T00:14:00.669234036Z" level=info msg="CreateContainer within sandbox \"bc67774da7692a881b6c36f54852d95743f87b4fdc80709e490f3d4554fa0afa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:14:00.684489 containerd[1508]: time="2025-05-16T00:14:00.684449371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"88be55a2d882a61cec6220ca0135143c3535deeef5b785675b1c73a6818e27cb\"" May 16 00:14:00.685030 kubelet[2309]: E0516 00:14:00.684992 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:00.686531 containerd[1508]: time="2025-05-16T00:14:00.686493081Z" level=info msg="CreateContainer within sandbox \"88be55a2d882a61cec6220ca0135143c3535deeef5b785675b1c73a6818e27cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:14:00.725116 containerd[1508]: time="2025-05-16T00:14:00.725050881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b04c8a7e136bff62742fc6afd3820a5fbc0100943d7ddeb32cccdfbd4e72f0b\"" May 16 00:14:00.725798 kubelet[2309]: E0516 00:14:00.725762 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:00.727381 containerd[1508]: time="2025-05-16T00:14:00.727332382Z" level=info msg="CreateContainer within sandbox \"9b04c8a7e136bff62742fc6afd3820a5fbc0100943d7ddeb32cccdfbd4e72f0b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:14:00.993001 containerd[1508]: time="2025-05-16T00:14:00.992939763Z" level=info msg="Container 5c3e5cc816163ee0316c8c83c20b80063d70617680636bc97d2dedc8283239ce: CDI devices from CRI Config.CDIDevices: []" May 16 00:14:01.027652 containerd[1508]: time="2025-05-16T00:14:01.027604199Z" level=info msg="Container 7678ad3442ffd5f117591ca9b4003e7f0b5241e1c1a4de104068ffb86ebe9c29: CDI devices from CRI Config.CDIDevices: []" May 16 00:14:01.104685 containerd[1508]: time="2025-05-16T00:14:01.104638869Z" level=info msg="Container 2c87099c73d6a8a56529921eafb536d5a4a71b968edb6dc2da6b20af114857a4: CDI devices from CRI Config.CDIDevices: []" May 16 00:14:01.472481 containerd[1508]: time="2025-05-16T00:14:01.472425679Z" level=info msg="CreateContainer within sandbox \"bc67774da7692a881b6c36f54852d95743f87b4fdc80709e490f3d4554fa0afa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c3e5cc816163ee0316c8c83c20b80063d70617680636bc97d2dedc8283239ce\"" May 16 00:14:01.473109 containerd[1508]: time="2025-05-16T00:14:01.473073174Z" 
level=info msg="StartContainer for \"5c3e5cc816163ee0316c8c83c20b80063d70617680636bc97d2dedc8283239ce\"" May 16 00:14:01.474182 containerd[1508]: time="2025-05-16T00:14:01.474155127Z" level=info msg="connecting to shim 5c3e5cc816163ee0316c8c83c20b80063d70617680636bc97d2dedc8283239ce" address="unix:///run/containerd/s/7dafb26f2f2e517c1728b7addf788650022693bba11ddb257f3aac1d84b76420" protocol=ttrpc version=3 May 16 00:14:01.502615 systemd[1]: Started cri-containerd-5c3e5cc816163ee0316c8c83c20b80063d70617680636bc97d2dedc8283239ce.scope - libcontainer container 5c3e5cc816163ee0316c8c83c20b80063d70617680636bc97d2dedc8283239ce. May 16 00:14:01.810499 containerd[1508]: time="2025-05-16T00:14:01.810351931Z" level=info msg="CreateContainer within sandbox \"88be55a2d882a61cec6220ca0135143c3535deeef5b785675b1c73a6818e27cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7678ad3442ffd5f117591ca9b4003e7f0b5241e1c1a4de104068ffb86ebe9c29\"" May 16 00:14:01.810909 containerd[1508]: time="2025-05-16T00:14:01.810831661Z" level=info msg="CreateContainer within sandbox \"9b04c8a7e136bff62742fc6afd3820a5fbc0100943d7ddeb32cccdfbd4e72f0b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c87099c73d6a8a56529921eafb536d5a4a71b968edb6dc2da6b20af114857a4\"" May 16 00:14:01.811031 containerd[1508]: time="2025-05-16T00:14:01.811008246Z" level=info msg="StartContainer for \"7678ad3442ffd5f117591ca9b4003e7f0b5241e1c1a4de104068ffb86ebe9c29\"" May 16 00:14:01.811668 containerd[1508]: time="2025-05-16T00:14:01.811644085Z" level=info msg="StartContainer for \"2c87099c73d6a8a56529921eafb536d5a4a71b968edb6dc2da6b20af114857a4\"" May 16 00:14:01.812028 containerd[1508]: time="2025-05-16T00:14:01.811688326Z" level=info msg="StartContainer for \"5c3e5cc816163ee0316c8c83c20b80063d70617680636bc97d2dedc8283239ce\" returns successfully" May 16 00:14:01.812738 containerd[1508]: time="2025-05-16T00:14:01.812679791Z" level=info msg="connecting to shim 7678ad3442ffd5f117591ca9b4003e7f0b5241e1c1a4de104068ffb86ebe9c29" address="unix:///run/containerd/s/4cf5f211ad640234d9b2f083efc80383da7ee5971ac1ddeb08ea456f9671b05c" protocol=ttrpc version=3 May 16 00:14:01.812974 containerd[1508]: time="2025-05-16T00:14:01.812940459Z" level=info msg="connecting to shim 2c87099c73d6a8a56529921eafb536d5a4a71b968edb6dc2da6b20af114857a4" address="unix:///run/containerd/s/f59ea3a23296bb77680d9a9f7276964b43c021345758baf17f50260b8c4dc958" protocol=ttrpc version=3 May 16 00:14:01.839525 systemd[1]: Started cri-containerd-7678ad3442ffd5f117591ca9b4003e7f0b5241e1c1a4de104068ffb86ebe9c29.scope - libcontainer container 7678ad3442ffd5f117591ca9b4003e7f0b5241e1c1a4de104068ffb86ebe9c29. May 16 00:14:01.844118 systemd[1]: Started cri-containerd-2c87099c73d6a8a56529921eafb536d5a4a71b968edb6dc2da6b20af114857a4.scope - libcontainer container 2c87099c73d6a8a56529921eafb536d5a4a71b968edb6dc2da6b20af114857a4. 
May 16 00:14:01.987511 containerd[1508]: time="2025-05-16T00:14:01.987019453Z" level=info msg="StartContainer for \"7678ad3442ffd5f117591ca9b4003e7f0b5241e1c1a4de104068ffb86ebe9c29\" returns successfully" May 16 00:14:01.987511 containerd[1508]: time="2025-05-16T00:14:01.987131910Z" level=info msg="StartContainer for \"2c87099c73d6a8a56529921eafb536d5a4a71b968edb6dc2da6b20af114857a4\" returns successfully" May 16 00:14:02.033047 kubelet[2309]: E0516 00:14:02.032435 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:14:02.033047 kubelet[2309]: E0516 00:14:02.032566 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:02.036449 kubelet[2309]: E0516 00:14:02.035756 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:14:02.036449 kubelet[2309]: E0516 00:14:02.035874 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:02.039000 kubelet[2309]: E0516 00:14:02.038836 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:14:02.039000 kubelet[2309]: E0516 00:14:02.038958 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:02.205562 kubelet[2309]: I0516 00:14:02.205531 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:14:02.845538 kubelet[2309]: E0516 00:14:02.844558 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 00:14:02.951865 kubelet[2309]: I0516 00:14:02.951818 2309 apiserver.go:52] "Watching apiserver" May 16 00:14:02.966417 kubelet[2309]: I0516 00:14:02.966351 2309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:14:03.041926 kubelet[2309]: E0516 00:14:03.041890 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:14:03.042431 kubelet[2309]: E0516 00:14:03.042011 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:14:03.042431 kubelet[2309]: E0516 00:14:03.042045 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:03.042431 kubelet[2309]: E0516 00:14:03.042143 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:03.042786 kubelet[2309]: E0516 00:14:03.042757 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 00:14:03.042861 kubelet[2309]: E0516 00:14:03.042843 2309 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:03.086580 kubelet[2309]: I0516 00:14:03.086534 2309 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:14:03.169919 kubelet[2309]: I0516 00:14:03.169726 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:14:03.212034 kubelet[2309]: E0516 00:14:03.211974 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 00:14:03.212034 kubelet[2309]: I0516 00:14:03.212008 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:14:03.213487 kubelet[2309]: E0516 00:14:03.213464 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 00:14:03.213487 kubelet[2309]: I0516 00:14:03.213481 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:14:03.214996 kubelet[2309]: E0516 00:14:03.214975 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 00:14:04.042171 kubelet[2309]: I0516 00:14:04.042131 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:14:04.042775 kubelet[2309]: I0516 00:14:04.042204 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:14:04.095897 kubelet[2309]: E0516 00:14:04.095852 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:04.096033 kubelet[2309]: E0516 00:14:04.095995 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:05.044204 kubelet[2309]: E0516 00:14:05.044161 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:05.044825 kubelet[2309]: I0516 00:14:05.044211 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:14:05.051114 kubelet[2309]: E0516 00:14:05.051069 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:14:05.051305 kubelet[2309]: E0516 00:14:05.051199 2309 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:05.441088 systemd[1]: Reload requested from client PID 2602 ('systemctl') (unit session-9.scope)... May 16 00:14:05.441103 systemd[1]: Reloading... May 16 00:14:05.528392 zram_generator::config[2649]: No configuration found. 
May 16 00:14:05.641927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:14:05.762279 systemd[1]: Reloading finished in 320 ms. May 16 00:14:05.793589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:14:05.810779 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:14:05.811073 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:14:05.811127 systemd[1]: kubelet.service: Consumed 960ms CPU time, 136.1M memory peak. May 16 00:14:05.813223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:14:06.043004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:14:06.054930 (kubelet)[2691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:14:06.096736 kubelet[2691]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:14:06.096736 kubelet[2691]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:14:06.096736 kubelet[2691]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:14:06.097167 kubelet[2691]: I0516 00:14:06.096793 2691 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:14:06.103407 kubelet[2691]: I0516 00:14:06.103328 2691 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:14:06.103407 kubelet[2691]: I0516 00:14:06.103388 2691 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:14:06.104062 kubelet[2691]: I0516 00:14:06.103680 2691 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:14:06.104916 kubelet[2691]: I0516 00:14:06.104873 2691 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 00:14:06.107204 kubelet[2691]: I0516 00:14:06.107162 2691 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:14:06.115033 kubelet[2691]: I0516 00:14:06.114970 2691 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 00:14:06.120384 kubelet[2691]: I0516 00:14:06.120315 2691 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:14:06.120790 kubelet[2691]: I0516 00:14:06.120734 2691 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:14:06.121024 kubelet[2691]: I0516 00:14:06.120782 2691 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:14:06.121125 kubelet[2691]: I0516 00:14:06.121031 2691 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:14:06.121125 kubelet[2691]: I0516 00:14:06.121043 2691 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:14:06.121125 kubelet[2691]: I0516 00:14:06.121105 2691 state_mem.go:36] "Initialized new in-memory state store" May 16 00:14:06.121933 kubelet[2691]: I0516 00:14:06.121303 2691 kubelet.go:446] "Attempting to sync node with API server" May 16 00:14:06.121933 kubelet[2691]: I0516 00:14:06.121337 2691 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:14:06.121933 kubelet[2691]: I0516 00:14:06.121380 2691 kubelet.go:352] "Adding apiserver pod source" May 16 00:14:06.121933 kubelet[2691]: I0516 00:14:06.121393 2691 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:14:06.124787 kubelet[2691]: I0516 00:14:06.122884 2691 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 16 00:14:06.124787 kubelet[2691]: I0516 00:14:06.123498 2691 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:14:06.124787 kubelet[2691]: I0516 00:14:06.124139 2691 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:14:06.124787 kubelet[2691]: I0516 00:14:06.124171 2691 server.go:1287] "Started kubelet" May 16 00:14:06.126465 kubelet[2691]: I0516 00:14:06.125553 2691 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:14:06.126465 kubelet[2691]: I0516 
00:14:06.126455 2691 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:14:06.126702 kubelet[2691]: I0516 00:14:06.126505 2691 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:14:06.126886 kubelet[2691]: I0516 00:14:06.126853 2691 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:14:06.132636 kubelet[2691]: I0516 00:14:06.130067 2691 server.go:479] "Adding debug handlers to kubelet server" May 16 00:14:06.136328 kubelet[2691]: I0516 00:14:06.136292 2691 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:14:06.136753 kubelet[2691]: I0516 00:14:06.136728 2691 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:14:06.137147 kubelet[2691]: E0516 00:14:06.137118 2691 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:14:06.140745 kubelet[2691]: I0516 00:14:06.140711 2691 factory.go:221] Registration of the systemd container factory successfully May 16 00:14:06.141023 kubelet[2691]: I0516 00:14:06.140996 2691 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:14:06.143038 kubelet[2691]: E0516 00:14:06.143005 2691 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:14:06.144046 kubelet[2691]: I0516 00:14:06.143999 2691 reconciler.go:26] "Reconciler: start to sync state" May 16 00:14:06.144437 kubelet[2691]: I0516 00:14:06.144064 2691 factory.go:221] Registration of the containerd container factory successfully May 16 00:14:06.144500 kubelet[2691]: I0516 00:14:06.144451 2691 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:14:06.146950 kubelet[2691]: I0516 00:14:06.146887 2691 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:14:06.148314 kubelet[2691]: I0516 00:14:06.148283 2691 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:14:06.148314 kubelet[2691]: I0516 00:14:06.148312 2691 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:14:06.148418 kubelet[2691]: I0516 00:14:06.148333 2691 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 00:14:06.148418 kubelet[2691]: I0516 00:14:06.148341 2691 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:14:06.148482 kubelet[2691]: E0516 00:14:06.148427 2691 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:14:06.179929 kubelet[2691]: I0516 00:14:06.179894 2691 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:14:06.179929 kubelet[2691]: I0516 00:14:06.179916 2691 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:14:06.179929 kubelet[2691]: I0516 00:14:06.179933 2691 state_mem.go:36] "Initialized new in-memory state store" May 16 00:14:06.180118 kubelet[2691]: I0516 00:14:06.180081 2691 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:14:06.180118 kubelet[2691]: I0516 00:14:06.180092 2691 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:14:06.180118 kubelet[2691]: I0516 00:14:06.180110 2691 policy_none.go:49] "None policy: Start" May 16 00:14:06.180118 kubelet[2691]: I0516 00:14:06.180119 2691 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:14:06.180209 kubelet[2691]: I0516 00:14:06.180129 2691 state_mem.go:35] "Initializing new in-memory state store" May 16 00:14:06.180234 kubelet[2691]: I0516 00:14:06.180224 2691 state_mem.go:75] "Updated machine memory state" May 16 00:14:06.184177 kubelet[2691]: I0516 00:14:06.184122 2691 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:14:06.184394 kubelet[2691]: I0516 00:14:06.184313 2691 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:14:06.184394 kubelet[2691]: I0516 00:14:06.184326 2691 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:14:06.184546 kubelet[2691]: I0516 00:14:06.184511 2691 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:14:06.185684 kubelet[2691]: E0516 00:14:06.185404 2691 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 16 00:14:06.249565 kubelet[2691]: I0516 00:14:06.249521 2691 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:14:06.249723 kubelet[2691]: I0516 00:14:06.249521 2691 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 00:14:06.249800 kubelet[2691]: I0516 00:14:06.249536 2691 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:14:06.256765 kubelet[2691]: E0516 00:14:06.256697 2691 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:14:06.256906 kubelet[2691]: E0516 00:14:06.256775 2691 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:14:06.289395 kubelet[2691]: I0516 00:14:06.289340 2691 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 00:14:06.303031 kubelet[2691]: I0516 00:14:06.302911 2691 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 00:14:06.303031 kubelet[2691]: I0516 00:14:06.303000 2691 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 00:14:06.446525 kubelet[2691]: I0516 00:14:06.446443 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ea3a67026ad9aeb6dc562d18847c2f2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ea3a67026ad9aeb6dc562d18847c2f2\") " pod="kube-system/kube-apiserver-localhost" May 16 00:14:06.446525 kubelet[2691]: I0516 00:14:06.446505 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:14:06.446525 kubelet[2691]: I0516 00:14:06.446538 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:14:06.446910 kubelet[2691]: I0516 00:14:06.446590 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 00:14:06.446910 kubelet[2691]: I0516 00:14:06.446614 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:14:06.446910 kubelet[2691]: I0516 00:14:06.446647 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ea3a67026ad9aeb6dc562d18847c2f2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1ea3a67026ad9aeb6dc562d18847c2f2\") " pod="kube-system/kube-apiserver-localhost" May 16 00:14:06.446910 kubelet[2691]: I0516 00:14:06.446667 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ea3a67026ad9aeb6dc562d18847c2f2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1ea3a67026ad9aeb6dc562d18847c2f2\") " pod="kube-system/kube-apiserver-localhost" May 16 00:14:06.446910 kubelet[2691]: I0516 00:14:06.446687 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:14:06.447037 kubelet[2691]: I0516 00:14:06.446705 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:14:06.557347 kubelet[2691]: E0516 00:14:06.556992 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:06.557347 kubelet[2691]: E0516 00:14:06.557062 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:06.557347 kubelet[2691]: E0516 00:14:06.557110 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:07.122817 kubelet[2691]: I0516 00:14:07.122749 2691 apiserver.go:52] "Watching apiserver" May 16 00:14:07.145324 kubelet[2691]: I0516 00:14:07.145282 2691 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:14:07.165933 kubelet[2691]: I0516 00:14:07.165807 2691 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 00:14:07.165933 kubelet[2691]: I0516 00:14:07.165837 2691 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 00:14:07.166139 kubelet[2691]: E0516 00:14:07.166038 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:07.302986 kubelet[2691]: E0516 00:14:07.302939 2691 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:14:07.303191 kubelet[2691]: E0516 00:14:07.303142 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:07.303916 kubelet[2691]: E0516 00:14:07.303872 2691 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:14:07.304004 kubelet[2691]: E0516 00:14:07.303983 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:07.407823 kubelet[2691]: I0516 00:14:07.407654 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.407609431 podStartE2EDuration="3.407609431s" podCreationTimestamp="2025-05-16 00:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:14:07.40746788 +0000 UTC m=+1.347921622" watchObservedRunningTime="2025-05-16 00:14:07.407609431 +0000 UTC m=+1.348063173" May 16 00:14:07.523341 kubelet[2691]: I0516 00:14:07.523273 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.523251642 podStartE2EDuration="1.523251642s" podCreationTimestamp="2025-05-16 00:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:14:07.523245308 +0000 UTC m=+1.463699050" watchObservedRunningTime="2025-05-16 00:14:07.523251642 +0000 UTC m=+1.463705384" May 16 00:14:07.575828 kubelet[2691]: I0516 00:14:07.575748 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.575728243 podStartE2EDuration="3.575728243s" podCreationTimestamp="2025-05-16 00:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:14:07.538522694 +0000 UTC m=+1.478976426" watchObservedRunningTime="2025-05-16 00:14:07.575728243 +0000 UTC m=+1.516181985" May 16 00:14:08.167979 kubelet[2691]: E0516 00:14:08.167936 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:08.168445 kubelet[2691]: E0516 00:14:08.168051 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:08.184711 kubelet[2691]: E0516 00:14:08.184576 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:09.169412 kubelet[2691]: E0516 00:14:09.169356 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:10.711075 kubelet[2691]: E0516 00:14:10.709285 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:11.909279 kubelet[2691]: I0516 00:14:11.908980 2691 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:14:11.916703 containerd[1508]: time="2025-05-16T00:14:11.916634838Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 16 00:14:11.917264 kubelet[2691]: I0516 00:14:11.916915 2691 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 16 00:14:12.773500 systemd[1]: Created slice kubepods-besteffort-podcb30f4af_0048_4d9d_a5b1_1aacf7a0db74.slice - libcontainer container kubepods-besteffort-podcb30f4af_0048_4d9d_a5b1_1aacf7a0db74.slice.
May 16 00:14:12.804462 kubelet[2691]: I0516 00:14:12.804263 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb30f4af-0048-4d9d-a5b1-1aacf7a0db74-kube-proxy\") pod \"kube-proxy-5gcf4\" (UID: \"cb30f4af-0048-4d9d-a5b1-1aacf7a0db74\") " pod="kube-system/kube-proxy-5gcf4"
May 16 00:14:12.804462 kubelet[2691]: I0516 00:14:12.804328 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb30f4af-0048-4d9d-a5b1-1aacf7a0db74-xtables-lock\") pod \"kube-proxy-5gcf4\" (UID: \"cb30f4af-0048-4d9d-a5b1-1aacf7a0db74\") " pod="kube-system/kube-proxy-5gcf4"
May 16 00:14:12.804462 kubelet[2691]: I0516 00:14:12.804354 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb30f4af-0048-4d9d-a5b1-1aacf7a0db74-lib-modules\") pod \"kube-proxy-5gcf4\" (UID: \"cb30f4af-0048-4d9d-a5b1-1aacf7a0db74\") " pod="kube-system/kube-proxy-5gcf4"
May 16 00:14:12.804462 kubelet[2691]: I0516 00:14:12.804435 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdhwx\" (UniqueName: \"kubernetes.io/projected/cb30f4af-0048-4d9d-a5b1-1aacf7a0db74-kube-api-access-bdhwx\") pod \"kube-proxy-5gcf4\" (UID: \"cb30f4af-0048-4d9d-a5b1-1aacf7a0db74\") " pod="kube-system/kube-proxy-5gcf4"
May 16 00:14:13.090316 kubelet[2691]: E0516 00:14:13.090116 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:13.091077 containerd[1508]: time="2025-05-16T00:14:13.091006297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gcf4,Uid:cb30f4af-0048-4d9d-a5b1-1aacf7a0db74,Namespace:kube-system,Attempt:0,}"
May 16 00:14:13.498063 systemd[1]: Created slice kubepods-besteffort-poda645cc7d_edd1_4089_9c61_6d9a3e279f5c.slice - libcontainer container kubepods-besteffort-poda645cc7d_edd1_4089_9c61_6d9a3e279f5c.slice.
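[Annotation: the slice names in the "Created slice" records above are derived mechanically from the pod's QoS class and UID; dashes in the UID become underscores because "-" is a separator in systemd unit names. A small sketch reproducing the name seen in the log (the derivation rule is the systemd cgroup driver's convention, not something the log itself states):]

    # kube-proxy-5gcf4's UID, taken from the reconciler records above.
    uid = "cb30f4af-0048-4d9d-a5b1-1aacf7a0db74"
    slice_name = f"kubepods-besteffort-pod{uid.replace('-', '_')}.slice"
    print(slice_name)  # kubepods-besteffort-podcb30f4af_0048_4d9d_a5b1_1aacf7a0db74.slice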
May 16 00:14:13.515124 kubelet[2691]: I0516 00:14:13.511489 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a645cc7d-edd1-4089-9c61-6d9a3e279f5c-var-lib-calico\") pod \"tigera-operator-844669ff44-thjn5\" (UID: \"a645cc7d-edd1-4089-9c61-6d9a3e279f5c\") " pod="tigera-operator/tigera-operator-844669ff44-thjn5"
May 16 00:14:13.515124 kubelet[2691]: I0516 00:14:13.512869 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnhft\" (UniqueName: \"kubernetes.io/projected/a645cc7d-edd1-4089-9c61-6d9a3e279f5c-kube-api-access-nnhft\") pod \"tigera-operator-844669ff44-thjn5\" (UID: \"a645cc7d-edd1-4089-9c61-6d9a3e279f5c\") " pod="tigera-operator/tigera-operator-844669ff44-thjn5"
May 16 00:14:13.669981 containerd[1508]: time="2025-05-16T00:14:13.668836363Z" level=info msg="connecting to shim 08e1ebc253e449e87552f74cae00f491ed87f32f59a085196270f005b635f9b0" address="unix:///run/containerd/s/2c9215dfa61cfb182273683cb365224e7cf7a51c4d64e891232c0e5798d005e3" namespace=k8s.io protocol=ttrpc version=3
May 16 00:14:13.723721 systemd[1]: Started cri-containerd-08e1ebc253e449e87552f74cae00f491ed87f32f59a085196270f005b635f9b0.scope - libcontainer container 08e1ebc253e449e87552f74cae00f491ed87f32f59a085196270f005b635f9b0.
May 16 00:14:13.809135 containerd[1508]: time="2025-05-16T00:14:13.807075566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-thjn5,Uid:a645cc7d-edd1-4089-9c61-6d9a3e279f5c,Namespace:tigera-operator,Attempt:0,}"
May 16 00:14:13.817857 containerd[1508]: time="2025-05-16T00:14:13.817480459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gcf4,Uid:cb30f4af-0048-4d9d-a5b1-1aacf7a0db74,Namespace:kube-system,Attempt:0,} returns sandbox id \"08e1ebc253e449e87552f74cae00f491ed87f32f59a085196270f005b635f9b0\""
May 16 00:14:13.836943 kubelet[2691]: E0516 00:14:13.833353 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:13.842189 containerd[1508]: time="2025-05-16T00:14:13.839640982Z" level=info msg="CreateContainer within sandbox \"08e1ebc253e449e87552f74cae00f491ed87f32f59a085196270f005b635f9b0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 16 00:14:13.931230 containerd[1508]: time="2025-05-16T00:14:13.931168594Z" level=info msg="Container 8b436b2be236652a721013ae9be29a6e5dda1990f0e261ceef8b6977f22461b0: CDI devices from CRI Config.CDIDevices: []"
May 16 00:14:13.941107 containerd[1508]: time="2025-05-16T00:14:13.941019136Z" level=info msg="connecting to shim 1416db4ca33ba572d305aab54d055c27cebcf258e8d7ae261b1aae6d1199cfe6" address="unix:///run/containerd/s/b2fce56c540d8a4a30c03ce57b4e34f287987a2afdacb803dcdbc0881f2c775e" namespace=k8s.io protocol=ttrpc version=3
May 16 00:14:13.956244 containerd[1508]: time="2025-05-16T00:14:13.956192487Z" level=info msg="CreateContainer within sandbox \"08e1ebc253e449e87552f74cae00f491ed87f32f59a085196270f005b635f9b0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b436b2be236652a721013ae9be29a6e5dda1990f0e261ceef8b6977f22461b0\""
May 16 00:14:13.959678 containerd[1508]: time="2025-05-16T00:14:13.959635271Z" level=info msg="StartContainer for \"8b436b2be236652a721013ae9be29a6e5dda1990f0e261ceef8b6977f22461b0\""
May 16 00:14:13.962276 containerd[1508]: time="2025-05-16T00:14:13.962242915Z" level=info msg="connecting to shim 8b436b2be236652a721013ae9be29a6e5dda1990f0e261ceef8b6977f22461b0" address="unix:///run/containerd/s/2c9215dfa61cfb182273683cb365224e7cf7a51c4d64e891232c0e5798d005e3" protocol=ttrpc version=3
May 16 00:14:14.027673 systemd[1]: Started cri-containerd-8b436b2be236652a721013ae9be29a6e5dda1990f0e261ceef8b6977f22461b0.scope - libcontainer container 8b436b2be236652a721013ae9be29a6e5dda1990f0e261ceef8b6977f22461b0.
May 16 00:14:14.056711 systemd[1]: Started cri-containerd-1416db4ca33ba572d305aab54d055c27cebcf258e8d7ae261b1aae6d1199cfe6.scope - libcontainer container 1416db4ca33ba572d305aab54d055c27cebcf258e8d7ae261b1aae6d1199cfe6.
May 16 00:14:14.256120 containerd[1508]: time="2025-05-16T00:14:14.256055116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-thjn5,Uid:a645cc7d-edd1-4089-9c61-6d9a3e279f5c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1416db4ca33ba572d305aab54d055c27cebcf258e8d7ae261b1aae6d1199cfe6\""
May 16 00:14:14.257861 containerd[1508]: time="2025-05-16T00:14:14.257797984Z" level=info msg="StartContainer for \"8b436b2be236652a721013ae9be29a6e5dda1990f0e261ceef8b6977f22461b0\" returns successfully"
May 16 00:14:14.260155 containerd[1508]: time="2025-05-16T00:14:14.259220301Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 16 00:14:15.298720 kubelet[2691]: E0516 00:14:15.293895 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:15.379404 kubelet[2691]: I0516 00:14:15.375890 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5gcf4" podStartSLOduration=3.375862945 podStartE2EDuration="3.375862945s" podCreationTimestamp="2025-05-16 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:14:15.372043403 +0000 UTC m=+9.312497145" watchObservedRunningTime="2025-05-16 00:14:15.375862945 +0000 UTC m=+9.316316687"
May 16 00:14:16.655469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584915486.mount: Deactivated successfully.
May 16 00:14:18.188883 kubelet[2691]: E0516 00:14:18.188850 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:18.829804 containerd[1508]: time="2025-05-16T00:14:18.829710335Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:18.849428 containerd[1508]: time="2025-05-16T00:14:18.849331040Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451"
May 16 00:14:18.867090 containerd[1508]: time="2025-05-16T00:14:18.867020094Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:18.877166 containerd[1508]: time="2025-05-16T00:14:18.877097159Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:14:18.878200 containerd[1508]: time="2025-05-16T00:14:18.878058800Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 4.618790217s"
May 16 00:14:18.878200 containerd[1508]: time="2025-05-16T00:14:18.878100978Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\""
May 16 00:14:18.881673 containerd[1508]: time="2025-05-16T00:14:18.881635260Z" level=info msg="CreateContainer within sandbox \"1416db4ca33ba572d305aab54d055c27cebcf258e8d7ae261b1aae6d1199cfe6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 16 00:14:18.882597 kubelet[2691]: E0516 00:14:18.882143 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:18.966695 containerd[1508]: time="2025-05-16T00:14:18.964461702Z" level=info msg="Container 041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6: CDI devices from CRI Config.CDIDevices: []"
May 16 00:14:18.975438 containerd[1508]: time="2025-05-16T00:14:18.975387593Z" level=info msg="CreateContainer within sandbox \"1416db4ca33ba572d305aab54d055c27cebcf258e8d7ae261b1aae6d1199cfe6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6\""
May 16 00:14:18.977372 containerd[1508]: time="2025-05-16T00:14:18.975980773Z" level=info msg="StartContainer for \"041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6\""
May 16 00:14:18.977372 containerd[1508]: time="2025-05-16T00:14:18.977064339Z" level=info msg="connecting to shim 041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6" address="unix:///run/containerd/s/b2fce56c540d8a4a30c03ce57b4e34f287987a2afdacb803dcdbc0881f2c775e" protocol=ttrpc version=3
May 16 00:14:19.017500 systemd[1]: Started cri-containerd-041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6.scope - libcontainer container 041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6.
May 16 00:14:19.049473 containerd[1508]: time="2025-05-16T00:14:19.049430479Z" level=info msg="StartContainer for \"041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6\" returns successfully"
May 16 00:14:19.302077 kubelet[2691]: E0516 00:14:19.302039 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:19.311257 kubelet[2691]: I0516 00:14:19.311192 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-thjn5" podStartSLOduration=2.6909501220000003 podStartE2EDuration="7.311173291s" podCreationTimestamp="2025-05-16 00:14:12 +0000 UTC" firstStartedPulling="2025-05-16 00:14:14.258758592 +0000 UTC m=+8.199212334" lastFinishedPulling="2025-05-16 00:14:18.878981761 +0000 UTC m=+12.819435503" observedRunningTime="2025-05-16 00:14:19.310955597 +0000 UTC m=+13.251409339" watchObservedRunningTime="2025-05-16 00:14:19.311173291 +0000 UTC m=+13.251627033"
May 16 00:14:20.724943 kubelet[2691]: E0516 00:14:20.724901 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:21.862277 systemd[1]: cri-containerd-041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6.scope: Deactivated successfully.
May 16 00:14:21.865189 containerd[1508]: time="2025-05-16T00:14:21.865047270Z" level=info msg="received exit event container_id:\"041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6\" id:\"041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6\" pid:3011 exit_status:1 exited_at:{seconds:1747354461 nanos:864554438}"
May 16 00:14:21.865189 containerd[1508]: time="2025-05-16T00:14:21.865126574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6\" id:\"041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6\" pid:3011 exit_status:1 exited_at:{seconds:1747354461 nanos:864554438}"
May 16 00:14:21.891012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6-rootfs.mount: Deactivated successfully.
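[Annotation: the tigera-operator startup record above lets its two durations be cross-checked: podStartSLOduration is the end-to-end figure minus the image-pull window. Redoing the arithmetic from the record's own timestamps (a reading of the log, not kubelet's implementation):]

    # Values from the pod_startup_latency_tracker record above (seconds within 00:14).
    e2e  = 7.311173291                    # podStartE2EDuration
    pull = 18.878981761 - 14.258758592    # lastFinishedPulling - firstStartedPulling = 4.620223169
    slo  = e2e - pull
    print(slo)  # ~2.690950122; the log even shows the same float tail, ...0000003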
May 16 00:14:23.314172 kubelet[2691]: I0516 00:14:23.314118 2691 scope.go:117] "RemoveContainer" containerID="041ac35a859351488c045ae6fdc11cdab1e0f26d743291073bdc6fe3d9221fd6"
May 16 00:14:23.316476 containerd[1508]: time="2025-05-16T00:14:23.316435246Z" level=info msg="CreateContainer within sandbox \"1416db4ca33ba572d305aab54d055c27cebcf258e8d7ae261b1aae6d1199cfe6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 16 00:14:23.438495 containerd[1508]: time="2025-05-16T00:14:23.438454184Z" level=info msg="Container c9b28810512eed5e7fe7e89cd952639ac8b83d82709e7302b4e49d6820f5c017: CDI devices from CRI Config.CDIDevices: []"
May 16 00:14:23.535388 containerd[1508]: time="2025-05-16T00:14:23.535305292Z" level=info msg="CreateContainer within sandbox \"1416db4ca33ba572d305aab54d055c27cebcf258e8d7ae261b1aae6d1199cfe6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c9b28810512eed5e7fe7e89cd952639ac8b83d82709e7302b4e49d6820f5c017\""
May 16 00:14:23.538784 containerd[1508]: time="2025-05-16T00:14:23.538739058Z" level=info msg="StartContainer for \"c9b28810512eed5e7fe7e89cd952639ac8b83d82709e7302b4e49d6820f5c017\""
May 16 00:14:23.539810 containerd[1508]: time="2025-05-16T00:14:23.539773381Z" level=info msg="connecting to shim c9b28810512eed5e7fe7e89cd952639ac8b83d82709e7302b4e49d6820f5c017" address="unix:///run/containerd/s/b2fce56c540d8a4a30c03ce57b4e34f287987a2afdacb803dcdbc0881f2c775e" protocol=ttrpc version=3
May 16 00:14:23.562071 systemd[1]: Started cri-containerd-c9b28810512eed5e7fe7e89cd952639ac8b83d82709e7302b4e49d6820f5c017.scope - libcontainer container c9b28810512eed5e7fe7e89cd952639ac8b83d82709e7302b4e49d6820f5c017.
May 16 00:14:23.674091 containerd[1508]: time="2025-05-16T00:14:23.673943662Z" level=info msg="StartContainer for \"c9b28810512eed5e7fe7e89cd952639ac8b83d82709e7302b4e49d6820f5c017\" returns successfully"
May 16 00:14:26.126570 sudo[1741]: pam_unix(sudo:session): session closed for user root
May 16 00:14:26.128143 sshd[1740]: Connection closed by 10.0.0.1 port 54470
May 16 00:14:26.132805 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
May 16 00:14:26.141008 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:54470.service: Deactivated successfully.
May 16 00:14:26.143752 systemd[1]: session-9.scope: Deactivated successfully.
May 16 00:14:26.144031 systemd[1]: session-9.scope: Consumed 5.125s CPU time, 221.8M memory peak.
May 16 00:14:26.145811 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit.
May 16 00:14:26.148165 systemd-logind[1490]: Removed session 9.
May 16 00:14:31.596306 systemd[1]: Created slice kubepods-besteffort-podf9a0255b_a968_4af7_9c40_1f2af395ee7d.slice - libcontainer container kubepods-besteffort-podf9a0255b_a968_4af7_9c40_1f2af395ee7d.slice.
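[Annotation: the RemoveContainer/Attempt:1 sequence at the top of this span is kubelet reacting to the TaskExit event recorded earlier: tigera-operator's first container (041ac35a...) exited with status 1 and was recreated as c9b28810... in the same sandbox. The exited_at field is plain epoch seconds plus nanos; converting it back gives the journal's own timestamp:]

    from datetime import datetime, timezone

    # exited_at from the TaskExit event above: {seconds:1747354461 nanos:864554438}
    t = datetime.fromtimestamp(1747354461 + 864554438e-9, tz=timezone.utc)
    print(t.isoformat())  # 2025-05-16T00:14:21.864554+00:00, matching the 00:14:21.865 journal stamp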
May 16 00:14:31.658891 kubelet[2691]: I0516 00:14:31.658811 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9a0255b-a968-4af7-9c40-1f2af395ee7d-typha-certs\") pod \"calico-typha-77f46dbf7f-r8llt\" (UID: \"f9a0255b-a968-4af7-9c40-1f2af395ee7d\") " pod="calico-system/calico-typha-77f46dbf7f-r8llt"
May 16 00:14:31.658891 kubelet[2691]: I0516 00:14:31.658870 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2w7\" (UniqueName: \"kubernetes.io/projected/f9a0255b-a968-4af7-9c40-1f2af395ee7d-kube-api-access-2w2w7\") pod \"calico-typha-77f46dbf7f-r8llt\" (UID: \"f9a0255b-a968-4af7-9c40-1f2af395ee7d\") " pod="calico-system/calico-typha-77f46dbf7f-r8llt"
May 16 00:14:31.658891 kubelet[2691]: I0516 00:14:31.658897 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a0255b-a968-4af7-9c40-1f2af395ee7d-tigera-ca-bundle\") pod \"calico-typha-77f46dbf7f-r8llt\" (UID: \"f9a0255b-a968-4af7-9c40-1f2af395ee7d\") " pod="calico-system/calico-typha-77f46dbf7f-r8llt"
May 16 00:14:31.885748 kubelet[2691]: W0516 00:14:31.885619 2691 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
May 16 00:14:31.886467 kubelet[2691]: E0516 00:14:31.886310 2691 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
May 16 00:14:31.899764 systemd[1]: Created slice kubepods-besteffort-pod125e605e_55d8_46fe_90c4_cb427f89fa90.slice - libcontainer container kubepods-besteffort-pod125e605e_55d8_46fe_90c4_cb427f89fa90.slice.
May 16 00:14:31.906068 kubelet[2691]: E0516 00:14:31.906024 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:31.906787 containerd[1508]: time="2025-05-16T00:14:31.906731248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f46dbf7f-r8llt,Uid:f9a0255b-a968-4af7-9c40-1f2af395ee7d,Namespace:calico-system,Attempt:0,}"
May 16 00:14:31.962095 containerd[1508]: time="2025-05-16T00:14:31.962040679Z" level=info msg="connecting to shim 438b0431450f8a6f241fb72bacf6fcf0a9c1b012e25afbbfaddcf2a7dee57780" address="unix:///run/containerd/s/3724cc77fe160cfeefc7ed4ce4c9c5bce27e815a120860b04ca2e7448d3b87d0" namespace=k8s.io protocol=ttrpc version=3
May 16 00:14:31.989580 systemd[1]: Started cri-containerd-438b0431450f8a6f241fb72bacf6fcf0a9c1b012e25afbbfaddcf2a7dee57780.scope - libcontainer container 438b0431450f8a6f241fb72bacf6fcf0a9c1b012e25afbbfaddcf2a7dee57780.
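[Annotation: the recurring dns.go:153 record throughout this log is kubelet capping a pod's resolv.conf at three nameservers, the historical glibc resolver limit. The host must have had at least one more server configured than the three that survived; the fourth entry below is hypothetical, shown only to trigger the same truncation:]

    MAX_NAMESERVERS = 3  # kubelet's limit, matching the glibc resolver

    configured = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]  # 9.9.9.9 is a hypothetical 4th entry
    applied = configured[:MAX_NAMESERVERS]  # extras beyond the first three are omitted
    print("the applied nameserver line is:", " ".join(applied))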
May 16 00:14:32.061200 kubelet[2691]: I0516 00:14:32.061154 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-lib-modules\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061200 kubelet[2691]: I0516 00:14:32.061197 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-cni-net-dir\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061200 kubelet[2691]: I0516 00:14:32.061227 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-cni-bin-dir\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061500 kubelet[2691]: I0516 00:14:32.061241 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-policysync\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061500 kubelet[2691]: I0516 00:14:32.061278 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnl7z\" (UniqueName: \"kubernetes.io/projected/125e605e-55d8-46fe-90c4-cb427f89fa90-kube-api-access-mnl7z\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061500 kubelet[2691]: I0516 00:14:32.061323 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-var-run-calico\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061500 kubelet[2691]: I0516 00:14:32.061349 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-xtables-lock\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061500 kubelet[2691]: I0516 00:14:32.061399 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-flexvol-driver-host\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061662 kubelet[2691]: I0516 00:14:32.061435 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-cni-log-dir\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061662 kubelet[2691]: I0516 00:14:32.061460 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/125e605e-55d8-46fe-90c4-cb427f89fa90-var-lib-calico\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061662 kubelet[2691]: I0516 00:14:32.061484 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/125e605e-55d8-46fe-90c4-cb427f89fa90-node-certs\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.061662 kubelet[2691]: I0516 00:14:32.061515 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/125e605e-55d8-46fe-90c4-cb427f89fa90-tigera-ca-bundle\") pod \"calico-node-8pxfz\" (UID: \"125e605e-55d8-46fe-90c4-cb427f89fa90\") " pod="calico-system/calico-node-8pxfz"
May 16 00:14:32.072680 containerd[1508]: time="2025-05-16T00:14:32.072636848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f46dbf7f-r8llt,Uid:f9a0255b-a968-4af7-9c40-1f2af395ee7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"438b0431450f8a6f241fb72bacf6fcf0a9c1b012e25afbbfaddcf2a7dee57780\""
May 16 00:14:32.073450 kubelet[2691]: E0516 00:14:32.073416 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:14:32.074399 containerd[1508]: time="2025-05-16T00:14:32.074353915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\""
May 16 00:14:32.170517 kubelet[2691]: E0516 00:14:32.170393 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 00:14:32.170517 kubelet[2691]: W0516 00:14:32.170426 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 00:14:32.170517 kubelet[2691]: E0516 00:14:32.170469 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 00:14:32.172012 kubelet[2691]: E0516 00:14:32.171998 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 16 00:14:32.172012 kubelet[2691]: W0516 00:14:32.172010 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 16 00:14:32.172088 kubelet[2691]: E0516 00:14:32.172021 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 16 00:14:32.315082 kubelet[2691]: E0516 00:14:32.315015 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff"
[... the same three-record FlexVolume init probe failure (driver-call.go:262, driver-call.go:149, plugins.go:695) recurs verbatim with fresh timestamps from 00:14:32.362 through 00:14:32.476, where the capture ends mid-record; roughly thirty repetitions elided. The unique reconciler records interleaved with the burst are kept below ...]
May 16 00:14:32.369439 kubelet[2691]: I0516 00:14:32.369313 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff-kubelet-dir\") pod \"csi-node-driver-hz4bz\" (UID: \"cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff\") " pod="calico-system/csi-node-driver-hz4bz"
May 16 00:14:32.370012 kubelet[2691]: I0516 00:14:32.369929 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff-socket-dir\") pod \"csi-node-driver-hz4bz\" (UID: \"cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff\") " pod="calico-system/csi-node-driver-hz4bz"
May 16 00:14:32.370689 kubelet[2691]: I0516 00:14:32.370494 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pfsq\" (UniqueName: \"kubernetes.io/projected/cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff-kube-api-access-4pfsq\") pod \"csi-node-driver-hz4bz\" (UID: \"cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff\") " pod="calico-system/csi-node-driver-hz4bz"
May 16 00:14:32.371972 kubelet[2691]: I0516 00:14:32.371929 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff-registration-dir\") pod \"csi-node-driver-hz4bz\" (UID: \"cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff\") " pod="calico-system/csi-node-driver-hz4bz"
May 16 00:14:32.372302 kubelet[2691]: I0516 00:14:32.372285 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff-varrun\") pod \"csi-node-driver-hz4bz\" (UID: \"cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff\") " pod="calico-system/csi-node-driver-hz4bz"
Error: unexpected end of JSON input" May 16 00:14:32.476258 kubelet[2691]: E0516 00:14:32.476242 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.476258 kubelet[2691]: W0516 00:14:32.476255 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.476309 kubelet[2691]: E0516 00:14:32.476271 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.476577 kubelet[2691]: E0516 00:14:32.476557 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.476577 kubelet[2691]: W0516 00:14:32.476575 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.476644 kubelet[2691]: E0516 00:14:32.476597 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.476891 kubelet[2691]: E0516 00:14:32.476859 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.476891 kubelet[2691]: W0516 00:14:32.476875 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.476891 kubelet[2691]: E0516 00:14:32.476891 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.477164 kubelet[2691]: E0516 00:14:32.477136 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.477164 kubelet[2691]: W0516 00:14:32.477153 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.477252 kubelet[2691]: E0516 00:14:32.477168 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.477459 kubelet[2691]: E0516 00:14:32.477434 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.477459 kubelet[2691]: W0516 00:14:32.477450 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.477525 kubelet[2691]: E0516 00:14:32.477466 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:32.477727 kubelet[2691]: E0516 00:14:32.477698 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.477727 kubelet[2691]: W0516 00:14:32.477714 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.477808 kubelet[2691]: E0516 00:14:32.477731 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.478016 kubelet[2691]: E0516 00:14:32.477998 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.478016 kubelet[2691]: W0516 00:14:32.478012 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.478081 kubelet[2691]: E0516 00:14:32.478030 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.478264 kubelet[2691]: E0516 00:14:32.478249 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.478264 kubelet[2691]: W0516 00:14:32.478262 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.478341 kubelet[2691]: E0516 00:14:32.478275 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.478517 kubelet[2691]: E0516 00:14:32.478504 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.478517 kubelet[2691]: W0516 00:14:32.478514 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.478575 kubelet[2691]: E0516 00:14:32.478530 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.478886 kubelet[2691]: E0516 00:14:32.478862 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.478886 kubelet[2691]: W0516 00:14:32.478873 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.478886 kubelet[2691]: E0516 00:14:32.478881 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:32.479096 kubelet[2691]: E0516 00:14:32.479079 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.479096 kubelet[2691]: W0516 00:14:32.479089 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.479140 kubelet[2691]: E0516 00:14:32.479098 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.479308 kubelet[2691]: E0516 00:14:32.479293 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.479308 kubelet[2691]: W0516 00:14:32.479303 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.479367 kubelet[2691]: E0516 00:14:32.479311 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.479576 kubelet[2691]: E0516 00:14:32.479559 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.479576 kubelet[2691]: W0516 00:14:32.479568 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.479576 kubelet[2691]: E0516 00:14:32.479575 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.488616 kubelet[2691]: E0516 00:14:32.488572 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.488616 kubelet[2691]: W0516 00:14:32.488603 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.488821 kubelet[2691]: E0516 00:14:32.488639 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:32.492393 kubelet[2691]: E0516 00:14:32.492351 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:32.492514 kubelet[2691]: W0516 00:14:32.492406 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:32.492514 kubelet[2691]: E0516 00:14:32.492430 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:33.164109 kubelet[2691]: E0516 00:14:33.164050 2691 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition May 16 00:14:33.164614 kubelet[2691]: E0516 00:14:33.164197 2691 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/125e605e-55d8-46fe-90c4-cb427f89fa90-node-certs podName:125e605e-55d8-46fe-90c4-cb427f89fa90 nodeName:}" failed. No retries permitted until 2025-05-16 00:14:33.664140759 +0000 UTC m=+27.604594501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/125e605e-55d8-46fe-90c4-cb427f89fa90-node-certs") pod "calico-node-8pxfz" (UID: "125e605e-55d8-46fe-90c4-cb427f89fa90") : failed to sync secret cache: timed out waiting for the condition May 16 00:14:33.183437 kubelet[2691]: E0516 00:14:33.183396 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.183437 kubelet[2691]: W0516 00:14:33.183421 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.183437 kubelet[2691]: E0516 00:14:33.183445 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:33.284650 kubelet[2691]: E0516 00:14:33.284473 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.284650 kubelet[2691]: W0516 00:14:33.284514 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.284650 kubelet[2691]: E0516 00:14:33.284549 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:33.385886 kubelet[2691]: E0516 00:14:33.385840 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.385886 kubelet[2691]: W0516 00:14:33.385866 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.385886 kubelet[2691]: E0516 00:14:33.385888 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:33.486480 kubelet[2691]: E0516 00:14:33.486427 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.486480 kubelet[2691]: W0516 00:14:33.486450 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.486480 kubelet[2691]: E0516 00:14:33.486471 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:33.587204 kubelet[2691]: E0516 00:14:33.587152 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.587204 kubelet[2691]: W0516 00:14:33.587176 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.587204 kubelet[2691]: E0516 00:14:33.587196 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:33.688335 kubelet[2691]: E0516 00:14:33.688301 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.688335 kubelet[2691]: W0516 00:14:33.688322 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.688335 kubelet[2691]: E0516 00:14:33.688342 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:33.688754 kubelet[2691]: E0516 00:14:33.688724 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.688754 kubelet[2691]: W0516 00:14:33.688746 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.688815 kubelet[2691]: E0516 00:14:33.688769 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:33.689133 kubelet[2691]: E0516 00:14:33.689118 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.689133 kubelet[2691]: W0516 00:14:33.689129 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.689211 kubelet[2691]: E0516 00:14:33.689139 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:33.689350 kubelet[2691]: E0516 00:14:33.689339 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.689350 kubelet[2691]: W0516 00:14:33.689348 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.689419 kubelet[2691]: E0516 00:14:33.689356 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:33.689598 kubelet[2691]: E0516 00:14:33.689581 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.689598 kubelet[2691]: W0516 00:14:33.689596 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.689675 kubelet[2691]: E0516 00:14:33.689604 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:33.811802 kubelet[2691]: E0516 00:14:33.811685 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:33.811802 kubelet[2691]: W0516 00:14:33.811712 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:33.811802 kubelet[2691]: E0516 00:14:33.811736 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:34.003347 containerd[1508]: time="2025-05-16T00:14:34.003283402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8pxfz,Uid:125e605e-55d8-46fe-90c4-cb427f89fa90,Namespace:calico-system,Attempt:0,}" May 16 00:14:34.149143 kubelet[2691]: E0516 00:14:34.148992 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:14:34.598412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131419623.mount: Deactivated successfully. May 16 00:14:36.105229 containerd[1508]: time="2025-05-16T00:14:36.105158575Z" level=info msg="connecting to shim bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c" address="unix:///run/containerd/s/f393a9d25e079fd071fa7e96c15e85946251eeb9c3108b2ec4a95ab5f72b4dbf" namespace=k8s.io protocol=ttrpc version=3 May 16 00:14:36.135696 systemd[1]: Started cri-containerd-bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c.scope - libcontainer container bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c. 
May 16 00:14:36.149230 kubelet[2691]: E0516 00:14:36.149177 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:14:36.242137 containerd[1508]: time="2025-05-16T00:14:36.242089339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8pxfz,Uid:125e605e-55d8-46fe-90c4-cb427f89fa90,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c\"" May 16 00:14:36.676542 containerd[1508]: time="2025-05-16T00:14:36.676477971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:36.684884 containerd[1508]: time="2025-05-16T00:14:36.684807569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 16 00:14:36.686832 containerd[1508]: time="2025-05-16T00:14:36.686791269Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:36.689724 containerd[1508]: time="2025-05-16T00:14:36.689696442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:36.690407 containerd[1508]: time="2025-05-16T00:14:36.690353610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 4.61594242s" May 16 00:14:36.690494 containerd[1508]: time="2025-05-16T00:14:36.690412500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 16 00:14:36.694904 containerd[1508]: time="2025-05-16T00:14:36.694868388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 00:14:36.716866 containerd[1508]: time="2025-05-16T00:14:36.716823421Z" level=info msg="CreateContainer within sandbox \"438b0431450f8a6f241fb72bacf6fcf0a9c1b012e25afbbfaddcf2a7dee57780\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 00:14:36.830186 containerd[1508]: time="2025-05-16T00:14:36.828750297Z" level=info msg="Container 4f2f3cc86e6be4c5896df050fec54efc7bb28b4e17d2db6202440696421ab946: CDI devices from CRI Config.CDIDevices: []" May 16 00:14:36.840260 containerd[1508]: time="2025-05-16T00:14:36.840197829Z" level=info msg="CreateContainer within sandbox \"438b0431450f8a6f241fb72bacf6fcf0a9c1b012e25afbbfaddcf2a7dee57780\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4f2f3cc86e6be4c5896df050fec54efc7bb28b4e17d2db6202440696421ab946\"" May 16 00:14:36.840803 containerd[1508]: time="2025-05-16T00:14:36.840752511Z" level=info msg="StartContainer for \"4f2f3cc86e6be4c5896df050fec54efc7bb28b4e17d2db6202440696421ab946\"" May 16 00:14:36.842238 containerd[1508]: 
time="2025-05-16T00:14:36.842180336Z" level=info msg="connecting to shim 4f2f3cc86e6be4c5896df050fec54efc7bb28b4e17d2db6202440696421ab946" address="unix:///run/containerd/s/3724cc77fe160cfeefc7ed4ce4c9c5bce27e815a120860b04ca2e7448d3b87d0" protocol=ttrpc version=3 May 16 00:14:36.870555 systemd[1]: Started cri-containerd-4f2f3cc86e6be4c5896df050fec54efc7bb28b4e17d2db6202440696421ab946.scope - libcontainer container 4f2f3cc86e6be4c5896df050fec54efc7bb28b4e17d2db6202440696421ab946. May 16 00:14:36.928207 containerd[1508]: time="2025-05-16T00:14:36.927977651Z" level=info msg="StartContainer for \"4f2f3cc86e6be4c5896df050fec54efc7bb28b4e17d2db6202440696421ab946\" returns successfully" May 16 00:14:37.358843 kubelet[2691]: E0516 00:14:37.358696 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:37.382591 kubelet[2691]: I0516 00:14:37.382513 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77f46dbf7f-r8llt" podStartSLOduration=1.7608912719999998 podStartE2EDuration="6.381486639s" podCreationTimestamp="2025-05-16 00:14:31 +0000 UTC" firstStartedPulling="2025-05-16 00:14:32.074133628 +0000 UTC m=+26.014587370" lastFinishedPulling="2025-05-16 00:14:36.694728995 +0000 UTC m=+30.635182737" observedRunningTime="2025-05-16 00:14:37.380811356 +0000 UTC m=+31.321265128" watchObservedRunningTime="2025-05-16 00:14:37.381486639 +0000 UTC m=+31.321940382" May 16 00:14:37.406254 kubelet[2691]: E0516 00:14:37.406210 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.406254 kubelet[2691]: W0516 00:14:37.406243 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.406494 kubelet[2691]: E0516 00:14:37.406272 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.406659 kubelet[2691]: E0516 00:14:37.406636 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.406710 kubelet[2691]: W0516 00:14:37.406659 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.406710 kubelet[2691]: E0516 00:14:37.406685 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.407023 kubelet[2691]: E0516 00:14:37.407006 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.407023 kubelet[2691]: W0516 00:14:37.407018 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.407104 kubelet[2691]: E0516 00:14:37.407029 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:37.407322 kubelet[2691]: E0516 00:14:37.407297 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.407322 kubelet[2691]: W0516 00:14:37.407309 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.407322 kubelet[2691]: E0516 00:14:37.407320 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.407578 kubelet[2691]: E0516 00:14:37.407562 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.407578 kubelet[2691]: W0516 00:14:37.407573 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.407662 kubelet[2691]: E0516 00:14:37.407586 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.407807 kubelet[2691]: E0516 00:14:37.407791 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.407807 kubelet[2691]: W0516 00:14:37.407802 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.407878 kubelet[2691]: E0516 00:14:37.407813 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.408022 kubelet[2691]: E0516 00:14:37.408006 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.408022 kubelet[2691]: W0516 00:14:37.408018 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.408093 kubelet[2691]: E0516 00:14:37.408028 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.408233 kubelet[2691]: E0516 00:14:37.408218 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.408233 kubelet[2691]: W0516 00:14:37.408229 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.408304 kubelet[2691]: E0516 00:14:37.408239 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:37.408493 kubelet[2691]: E0516 00:14:37.408473 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.408493 kubelet[2691]: W0516 00:14:37.408488 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.408584 kubelet[2691]: E0516 00:14:37.408501 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.408713 kubelet[2691]: E0516 00:14:37.408698 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.408713 kubelet[2691]: W0516 00:14:37.408709 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.408782 kubelet[2691]: E0516 00:14:37.408719 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.408911 kubelet[2691]: E0516 00:14:37.408896 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.408911 kubelet[2691]: W0516 00:14:37.408907 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.408985 kubelet[2691]: E0516 00:14:37.408917 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.409118 kubelet[2691]: E0516 00:14:37.409102 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.409118 kubelet[2691]: W0516 00:14:37.409113 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.409187 kubelet[2691]: E0516 00:14:37.409123 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.409334 kubelet[2691]: E0516 00:14:37.409319 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.409334 kubelet[2691]: W0516 00:14:37.409330 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.409432 kubelet[2691]: E0516 00:14:37.409339 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:37.409578 kubelet[2691]: E0516 00:14:37.409563 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.409578 kubelet[2691]: W0516 00:14:37.409575 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.409653 kubelet[2691]: E0516 00:14:37.409585 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.409790 kubelet[2691]: E0516 00:14:37.409774 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.409790 kubelet[2691]: W0516 00:14:37.409786 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.409861 kubelet[2691]: E0516 00:14:37.409795 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.413188 kubelet[2691]: E0516 00:14:37.413155 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.413188 kubelet[2691]: W0516 00:14:37.413176 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.413294 kubelet[2691]: E0516 00:14:37.413191 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.413512 kubelet[2691]: E0516 00:14:37.413475 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.413512 kubelet[2691]: W0516 00:14:37.413491 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.413512 kubelet[2691]: E0516 00:14:37.413507 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.413813 kubelet[2691]: E0516 00:14:37.413792 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.413813 kubelet[2691]: W0516 00:14:37.413806 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.413865 kubelet[2691]: E0516 00:14:37.413820 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:37.414071 kubelet[2691]: E0516 00:14:37.414048 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.414071 kubelet[2691]: W0516 00:14:37.414066 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.414119 kubelet[2691]: E0516 00:14:37.414083 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.414316 kubelet[2691]: E0516 00:14:37.414297 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.414316 kubelet[2691]: W0516 00:14:37.414314 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.414399 kubelet[2691]: E0516 00:14:37.414329 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.414641 kubelet[2691]: E0516 00:14:37.414625 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.414641 kubelet[2691]: W0516 00:14:37.414638 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.414691 kubelet[2691]: E0516 00:14:37.414652 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.414874 kubelet[2691]: E0516 00:14:37.414858 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.414898 kubelet[2691]: W0516 00:14:37.414872 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.414898 kubelet[2691]: E0516 00:14:37.414891 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.415107 kubelet[2691]: E0516 00:14:37.415092 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.415107 kubelet[2691]: W0516 00:14:37.415103 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.415164 kubelet[2691]: E0516 00:14:37.415118 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:37.415415 kubelet[2691]: E0516 00:14:37.415400 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.415415 kubelet[2691]: W0516 00:14:37.415412 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.415482 kubelet[2691]: E0516 00:14:37.415426 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.415736 kubelet[2691]: E0516 00:14:37.415719 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.415736 kubelet[2691]: W0516 00:14:37.415733 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.415799 kubelet[2691]: E0516 00:14:37.415749 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.415959 kubelet[2691]: E0516 00:14:37.415944 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.415982 kubelet[2691]: W0516 00:14:37.415958 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.415982 kubelet[2691]: E0516 00:14:37.415972 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.416192 kubelet[2691]: E0516 00:14:37.416177 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.416192 kubelet[2691]: W0516 00:14:37.416189 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.416238 kubelet[2691]: E0516 00:14:37.416203 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 00:14:37.416466 kubelet[2691]: E0516 00:14:37.416410 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.416466 kubelet[2691]: W0516 00:14:37.416424 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.416466 kubelet[2691]: E0516 00:14:37.416441 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:37.416733 kubelet[2691]: E0516 00:14:37.416711 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:37.416774 kubelet[2691]: W0516 00:14:37.416732 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:37.416774 kubelet[2691]: E0516 00:14:37.416750 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[annotation: the three-entry FlexVolume probe sequence above repeats, with only the timestamps changing, four more times between 00:14:37.416 and 00:14:37.418]
May 16 00:14:38.149668 kubelet[2691]: E0516 00:14:38.149597 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:14:38.349079 kubelet[2691]: I0516 00:14:38.349040 2691 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:14:38.349400 kubelet[2691]: E0516 00:14:38.349352 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[annotation: the same three-entry FlexVolume probe sequence then repeats thirty-two more times between 00:14:38.416 and 00:14:38.424; the final occurrence follows]
May 16 00:14:38.424872 kubelet[2691]: E0516 00:14:38.424858 2691 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 00:14:38.424895 kubelet[2691]: W0516 00:14:38.424871 2691 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 00:14:38.424895 kubelet[2691]: E0516 00:14:38.424882 2691 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 00:14:38.992168 containerd[1508]: time="2025-05-16T00:14:38.992096909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:39.002101 containerd[1508]: time="2025-05-16T00:14:39.002022984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 16 00:14:39.005230 containerd[1508]: time="2025-05-16T00:14:39.005199839Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:39.017684 containerd[1508]: time="2025-05-16T00:14:39.017619557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:39.018462 containerd[1508]: time="2025-05-16T00:14:39.018420721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 2.323514207s" May 16 00:14:39.018511 containerd[1508]: time="2025-05-16T00:14:39.018459269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 16 00:14:39.020737 containerd[1508]: time="2025-05-16T00:14:39.020298537Z" level=info msg="CreateContainer within sandbox \"bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 00:14:39.091857 containerd[1508]: time="2025-05-16T00:14:39.091795702Z" level=info msg="Container 1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9: CDI devices from CRI Config.CDIDevices: []" May 16 00:14:39.188123 containerd[1508]: time="2025-05-16T00:14:39.188070698Z" level=info msg="CreateContainer within sandbox \"bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9\"" May 16 00:14:39.188506 containerd[1508]: time="2025-05-16T00:14:39.188457428Z" level=info msg="StartContainer for \"1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9\"" May 16 00:14:39.189918 containerd[1508]: time="2025-05-16T00:14:39.189881068Z" level=info msg="connecting to shim 1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9" address="unix:///run/containerd/s/f393a9d25e079fd071fa7e96c15e85946251eeb9c3108b2ec4a95ab5f72b4dbf" protocol=ttrpc version=3 May 16 00:14:39.211516 systemd[1]: Started cri-containerd-1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9.scope - libcontainer container 1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9. May 16 00:14:39.271580 systemd[1]: cri-containerd-1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9.scope: Deactivated successfully. 
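[annotation: the FlexVolume errors above come from the kubelet's plugin prober: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and parses its stdout as JSON, so a missing binary yields empty output and "unexpected end of JSON input". The pod2daemon-flexvol image pulled above is the one whose flexvol-driver container installs that uds binary, which is why the spam stops shortly after this pull. A minimal hedged sketch of the driver contract follows; it illustrates the FlexVolume call convention only, and is not Calico's actual uds implementation.]

    package main

    // Sketch of a FlexVolume driver's "init" handshake. The kubelet runs
    // <plugin-dir>/<vendor~driver>/<driver> init and expects a JSON status
    // object on stdout; anything else (including the empty output of a
    // missing binary, as in the log above) is reported as an unmarshal error.

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// attach:false tells the kubelet this driver has no attach/detach phase.
    		out, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    		fmt.Println(string(out))
    		return
    	}
    	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(out))
    	os.Exit(1)
    }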
May 16 00:14:39.274283 containerd[1508]: time="2025-05-16T00:14:39.274187286Z" level=info msg="StartContainer for \"1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9\" returns successfully" May 16 00:14:39.274283 containerd[1508]: time="2025-05-16T00:14:39.274191284Z" level=info msg="received exit event container_id:\"1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9\" id:\"1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9\" pid:3459 exited_at:{seconds:1747354479 nanos:273711817}" May 16 00:14:39.274587 containerd[1508]: time="2025-05-16T00:14:39.274252737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9\" id:\"1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9\" pid:3459 exited_at:{seconds:1747354479 nanos:273711817}" May 16 00:14:39.299654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e7c82100f09222c70b04da0fd235f5ffaa4ad9d7ec66da88a228f96826aafb9-rootfs.mount: Deactivated successfully. May 16 00:14:40.149562 kubelet[2691]: E0516 00:14:40.149519 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:14:40.358997 containerd[1508]: time="2025-05-16T00:14:40.358950688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 00:14:41.126046 kubelet[2691]: I0516 00:14:41.125985 2691 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:14:41.126421 kubelet[2691]: E0516 00:14:41.126401 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:41.359584 kubelet[2691]: E0516 00:14:41.359549 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:42.149852 kubelet[2691]: E0516 00:14:42.149796 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:14:44.149407 kubelet[2691]: E0516 00:14:44.149324 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:14:45.095463 containerd[1508]: time="2025-05-16T00:14:45.095262278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:45.101225 containerd[1508]: time="2025-05-16T00:14:45.101176292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 16 00:14:45.116848 containerd[1508]: time="2025-05-16T00:14:45.116808714Z" level=info msg="ImageCreate event 
name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:45.125712 containerd[1508]: time="2025-05-16T00:14:45.125681466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:14:45.126247 containerd[1508]: time="2025-05-16T00:14:45.126215008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 4.767224089s" May 16 00:14:45.126332 containerd[1508]: time="2025-05-16T00:14:45.126249517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 16 00:14:45.146317 containerd[1508]: time="2025-05-16T00:14:45.146267324Z" level=info msg="CreateContainer within sandbox \"bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 00:14:45.199512 containerd[1508]: time="2025-05-16T00:14:45.199456223Z" level=info msg="Container 0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307: CDI devices from CRI Config.CDIDevices: []" May 16 00:14:45.295105 containerd[1508]: time="2025-05-16T00:14:45.295052336Z" level=info msg="CreateContainer within sandbox \"bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307\"" May 16 00:14:45.299143 containerd[1508]: time="2025-05-16T00:14:45.299080012Z" level=info msg="StartContainer for \"0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307\"" May 16 00:14:45.301018 containerd[1508]: time="2025-05-16T00:14:45.300954534Z" level=info msg="connecting to shim 0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307" address="unix:///run/containerd/s/f393a9d25e079fd071fa7e96c15e85946251eeb9c3108b2ec4a95ab5f72b4dbf" protocol=ttrpc version=3 May 16 00:14:45.322535 systemd[1]: Started cri-containerd-0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307.scope - libcontainer container 0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307. 
May 16 00:14:45.393228 containerd[1508]: time="2025-05-16T00:14:45.393035331Z" level=info msg="StartContainer for \"0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307\" returns successfully" May 16 00:14:46.780575 kubelet[2691]: E0516 00:14:46.780518 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:14:48.163040 containerd[1508]: time="2025-05-16T00:14:48.162982712Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:14:48.166069 systemd[1]: cri-containerd-0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307.scope: Deactivated successfully. May 16 00:14:48.166437 systemd[1]: cri-containerd-0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307.scope: Consumed 629ms CPU time, 181.1M memory peak, 3.6M read from disk, 170.9M written to disk. May 16 00:14:48.167300 containerd[1508]: time="2025-05-16T00:14:48.167277699Z" level=info msg="received exit event container_id:\"0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307\" id:\"0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307\" pid:3519 exited_at:{seconds:1747354488 nanos:167092356}" May 16 00:14:48.167560 containerd[1508]: time="2025-05-16T00:14:48.167540391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307\" id:\"0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307\" pid:3519 exited_at:{seconds:1747354488 nanos:167092356}" May 16 00:14:48.187893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ca06256759c34d5d3998a5c8e2051643b3aecdd0cfe57e78f4aa430e3dad307-rootfs.mount: Deactivated successfully. May 16 00:14:48.219678 kubelet[2691]: I0516 00:14:48.218874 2691 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:14:48.491304 kubelet[2691]: I0516 00:14:48.491241 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6715f7be-e888-4ac0-8c1b-fde56f478f63-config-volume\") pod \"coredns-668d6bf9bc-pm6bh\" (UID: \"6715f7be-e888-4ac0-8c1b-fde56f478f63\") " pod="kube-system/coredns-668d6bf9bc-pm6bh" May 16 00:14:48.491304 kubelet[2691]: I0516 00:14:48.491286 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xkxw\" (UniqueName: \"kubernetes.io/projected/6715f7be-e888-4ac0-8c1b-fde56f478f63-kube-api-access-8xkxw\") pod \"coredns-668d6bf9bc-pm6bh\" (UID: \"6715f7be-e888-4ac0-8c1b-fde56f478f63\") " pod="kube-system/coredns-668d6bf9bc-pm6bh" May 16 00:14:48.642107 systemd[1]: Created slice kubepods-burstable-pod6715f7be_e888_4ac0_8c1b_fde56f478f63.slice - libcontainer container kubepods-burstable-pod6715f7be_e888_4ac0_8c1b_fde56f478f63.slice. May 16 00:14:48.669884 systemd[1]: Created slice kubepods-besteffort-pod0d77b5c7_302f_48ea_88fb_1eaa21477648.slice - libcontainer container kubepods-besteffort-pod0d77b5c7_302f_48ea_88fb_1eaa21477648.slice. 
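[annotation: the "failed to reload cni configuration" message above appears to be containerd's watcher on /etc/cni/net.d firing for the calico-kubeconfig file before the install-cni container has finished writing the network config list containerd actually loads; until a *.conf or *.conflist file exists there, the runtime keeps reporting "cni plugin not initialized" and pods stay NotReady. For orientation only, the following sketch parses a conflist of roughly the expected shape; the JSON is an illustrative stand-in, not the file Calico generates:]

    package main

    // Minimal stand-in for a CNI network config list of the kind containerd
    // looks for in /etc/cni/net.d. A lone calico-kubeconfig file does not
    // satisfy the lookup, hence "no network config found" above.

    import (
    	"encoding/json"
    	"fmt"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s-pod-network",
      "plugins": [
        { "type": "calico", "datastore_type": "kubernetes" }
      ]
    }`

    func main() {
    	var cfg struct {
    		CNIVersion string `json:"cniVersion"`
    		Name       string `json:"name"`
    		Plugins    []struct {
    			Type string `json:"type"`
    		} `json:"plugins"`
    	}
    	if err := json.Unmarshal([]byte(conflist), &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("loaded %q with %d plugin(s)\n", cfg.Name, len(cfg.Plugins))
    }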
May 16 00:14:48.675673 systemd[1]: Created slice kubepods-besteffort-pod2cb50f28_04bb_4f6a_ac09_21e960ec8f86.slice - libcontainer container kubepods-besteffort-pod2cb50f28_04bb_4f6a_ac09_21e960ec8f86.slice. May 16 00:14:48.680157 systemd[1]: Created slice kubepods-burstable-pod5e130f73_1c5a_4996_b6b9_e20047051b8e.slice - libcontainer container kubepods-burstable-pod5e130f73_1c5a_4996_b6b9_e20047051b8e.slice. May 16 00:14:48.684801 systemd[1]: Created slice kubepods-besteffort-pod65274ddd_5bfc_4ed3_a962_8824ef50bc83.slice - libcontainer container kubepods-besteffort-pod65274ddd_5bfc_4ed3_a962_8824ef50bc83.slice. May 16 00:14:48.689295 systemd[1]: Created slice kubepods-besteffort-podb11c09e5_3cbb_468c_abbe_dc156fab2b5c.slice - libcontainer container kubepods-besteffort-podb11c09e5_3cbb_468c_abbe_dc156fab2b5c.slice. May 16 00:14:48.691858 kubelet[2691]: I0516 00:14:48.691826 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11c09e5-3cbb-468c-abbe-dc156fab2b5c-config\") pod \"goldmane-78d55f7ddc-d79sc\" (UID: \"b11c09e5-3cbb-468c-abbe-dc156fab2b5c\") " pod="calico-system/goldmane-78d55f7ddc-d79sc" May 16 00:14:48.691987 kubelet[2691]: I0516 00:14:48.691939 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2xm\" (UniqueName: \"kubernetes.io/projected/0d77b5c7-302f-48ea-88fb-1eaa21477648-kube-api-access-bs2xm\") pod \"calico-apiserver-67fc6f48b5-qdbqj\" (UID: \"0d77b5c7-302f-48ea-88fb-1eaa21477648\") " pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" May 16 00:14:48.691987 kubelet[2691]: I0516 00:14:48.691982 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmgtc\" (UniqueName: \"kubernetes.io/projected/5e130f73-1c5a-4996-b6b9-e20047051b8e-kube-api-access-qmgtc\") pod \"coredns-668d6bf9bc-7zxch\" (UID: \"5e130f73-1c5a-4996-b6b9-e20047051b8e\") " pod="kube-system/coredns-668d6bf9bc-7zxch" May 16 00:14:48.692091 kubelet[2691]: I0516 00:14:48.692001 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65274ddd-5bfc-4ed3-a962-8824ef50bc83-whisker-ca-bundle\") pod \"whisker-548dd86c66-prftl\" (UID: \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\") " pod="calico-system/whisker-548dd86c66-prftl" May 16 00:14:48.692091 kubelet[2691]: I0516 00:14:48.692017 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d77b5c7-302f-48ea-88fb-1eaa21477648-calico-apiserver-certs\") pod \"calico-apiserver-67fc6f48b5-qdbqj\" (UID: \"0d77b5c7-302f-48ea-88fb-1eaa21477648\") " pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" May 16 00:14:48.692147 kubelet[2691]: I0516 00:14:48.692093 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e130f73-1c5a-4996-b6b9-e20047051b8e-config-volume\") pod \"coredns-668d6bf9bc-7zxch\" (UID: \"5e130f73-1c5a-4996-b6b9-e20047051b8e\") " pod="kube-system/coredns-668d6bf9bc-7zxch" May 16 00:14:48.692236 kubelet[2691]: I0516 00:14:48.692210 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/65274ddd-5bfc-4ed3-a962-8824ef50bc83-whisker-backend-key-pair\") pod \"whisker-548dd86c66-prftl\" (UID: \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\") " pod="calico-system/whisker-548dd86c66-prftl" May 16 00:14:48.692236 kubelet[2691]: I0516 00:14:48.692237 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzn24\" (UniqueName: \"kubernetes.io/projected/65274ddd-5bfc-4ed3-a962-8824ef50bc83-kube-api-access-kzn24\") pod \"whisker-548dd86c66-prftl\" (UID: \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\") " pod="calico-system/whisker-548dd86c66-prftl" May 16 00:14:48.692439 kubelet[2691]: I0516 00:14:48.692256 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b11c09e5-3cbb-468c-abbe-dc156fab2b5c-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-d79sc\" (UID: \"b11c09e5-3cbb-468c-abbe-dc156fab2b5c\") " pod="calico-system/goldmane-78d55f7ddc-d79sc" May 16 00:14:48.692439 kubelet[2691]: I0516 00:14:48.692275 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb50f28-04bb-4f6a-ac09-21e960ec8f86-tigera-ca-bundle\") pod \"calico-kube-controllers-847f5796b-ssmkv\" (UID: \"2cb50f28-04bb-4f6a-ac09-21e960ec8f86\") " pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" May 16 00:14:48.692439 kubelet[2691]: I0516 00:14:48.692290 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b11c09e5-3cbb-468c-abbe-dc156fab2b5c-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-d79sc\" (UID: \"b11c09e5-3cbb-468c-abbe-dc156fab2b5c\") " pod="calico-system/goldmane-78d55f7ddc-d79sc" May 16 00:14:48.692439 kubelet[2691]: I0516 00:14:48.692305 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98m74\" (UniqueName: \"kubernetes.io/projected/2cb50f28-04bb-4f6a-ac09-21e960ec8f86-kube-api-access-98m74\") pod \"calico-kube-controllers-847f5796b-ssmkv\" (UID: \"2cb50f28-04bb-4f6a-ac09-21e960ec8f86\") " pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" May 16 00:14:48.692439 kubelet[2691]: I0516 00:14:48.692326 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhrd\" (UniqueName: \"kubernetes.io/projected/b11c09e5-3cbb-468c-abbe-dc156fab2b5c-kube-api-access-4rhrd\") pod \"goldmane-78d55f7ddc-d79sc\" (UID: \"b11c09e5-3cbb-468c-abbe-dc156fab2b5c\") " pod="calico-system/goldmane-78d55f7ddc-d79sc" May 16 00:14:48.694068 systemd[1]: Created slice kubepods-besteffort-pod25e056e5_f7d2_462c_bd3a_fd78b9601b08.slice - libcontainer container kubepods-besteffort-pod25e056e5_f7d2_462c_bd3a_fd78b9601b08.slice. 
May 16 00:14:48.793236 kubelet[2691]: I0516 00:14:48.793093 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnfq\" (UniqueName: \"kubernetes.io/projected/25e056e5-f7d2-462c-bd3a-fd78b9601b08-kube-api-access-nbnfq\") pod \"calico-apiserver-67fc6f48b5-rr84h\" (UID: \"25e056e5-f7d2-462c-bd3a-fd78b9601b08\") " pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" May 16 00:14:48.793936 kubelet[2691]: I0516 00:14:48.793904 2691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/25e056e5-f7d2-462c-bd3a-fd78b9601b08-calico-apiserver-certs\") pod \"calico-apiserver-67fc6f48b5-rr84h\" (UID: \"25e056e5-f7d2-462c-bd3a-fd78b9601b08\") " pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" May 16 00:14:49.154141 systemd[1]: Created slice kubepods-besteffort-podcda0ce1d_952c_4f48_bbf6_c0ac3e9334ff.slice - libcontainer container kubepods-besteffort-podcda0ce1d_952c_4f48_bbf6_c0ac3e9334ff.slice. May 16 00:14:49.156789 containerd[1508]: time="2025-05-16T00:14:49.156746931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hz4bz,Uid:cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff,Namespace:calico-system,Attempt:0,}" May 16 00:14:49.246377 kubelet[2691]: E0516 00:14:49.246335 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:49.246995 containerd[1508]: time="2025-05-16T00:14:49.246958446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pm6bh,Uid:6715f7be-e888-4ac0-8c1b-fde56f478f63,Namespace:kube-system,Attempt:0,}" May 16 00:14:49.274436 containerd[1508]: time="2025-05-16T00:14:49.273602521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-qdbqj,Uid:0d77b5c7-302f-48ea-88fb-1eaa21477648,Namespace:calico-apiserver,Attempt:0,}" May 16 00:14:49.278672 containerd[1508]: time="2025-05-16T00:14:49.278640770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f5796b-ssmkv,Uid:2cb50f28-04bb-4f6a-ac09-21e960ec8f86,Namespace:calico-system,Attempt:0,}" May 16 00:14:49.282978 kubelet[2691]: E0516 00:14:49.282951 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:14:49.283439 containerd[1508]: time="2025-05-16T00:14:49.283401308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zxch,Uid:5e130f73-1c5a-4996-b6b9-e20047051b8e,Namespace:kube-system,Attempt:0,}" May 16 00:14:49.287280 containerd[1508]: time="2025-05-16T00:14:49.287232382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548dd86c66-prftl,Uid:65274ddd-5bfc-4ed3-a962-8824ef50bc83,Namespace:calico-system,Attempt:0,}" May 16 00:14:49.292272 containerd[1508]: time="2025-05-16T00:14:49.292232402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-d79sc,Uid:b11c09e5-3cbb-468c-abbe-dc156fab2b5c,Namespace:calico-system,Attempt:0,}" May 16 00:14:49.297128 containerd[1508]: time="2025-05-16T00:14:49.297086068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-rr84h,Uid:25e056e5-f7d2-462c-bd3a-fd78b9601b08,Namespace:calico-apiserver,Attempt:0,}" May 16 00:14:49.614356 containerd[1508]: 
time="2025-05-16T00:14:49.614132971Z" level=error msg="Failed to destroy network for sandbox \"ffa5ac9cc0512eca1e4d59a52eb26d475f23f91ff00b520debcff65840095150\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.635712 containerd[1508]: time="2025-05-16T00:14:49.635630160Z" level=error msg="Failed to destroy network for sandbox \"2f3f311381ef24c3299cea0df4f80173cd6f1dbf361f97d7abc4610d29bb94d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.704987 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:50840.service - OpenSSH per-connection server daemon (10.0.0.1:50840). May 16 00:14:49.712537 containerd[1508]: time="2025-05-16T00:14:49.712250903Z" level=error msg="Failed to destroy network for sandbox \"989b572c9eef1d75bea8c229f8a88bc999b6be5b8ec4a07eaab2ab1b596259ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.764159 containerd[1508]: time="2025-05-16T00:14:49.764102312Z" level=error msg="Failed to destroy network for sandbox \"8cc89d0ad9a97c18b125112a9d179dc6995d57dce641c9085e98406ccd3ceba3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.802380 containerd[1508]: time="2025-05-16T00:14:49.802320503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 00:14:49.812164 containerd[1508]: time="2025-05-16T00:14:49.812109984Z" level=error msg="Failed to destroy network for sandbox \"170e97b5fb6b983d6cb97b687f3bdd5a026791a87e8f2e47785bc755f083a162\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.817756 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 50840 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:14:49.819890 sshd-session[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:14:49.827002 systemd-logind[1490]: New session 10 of user core. May 16 00:14:49.831564 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 16 00:14:49.856179 containerd[1508]: time="2025-05-16T00:14:49.856127846Z" level=error msg="Failed to destroy network for sandbox \"38553fd76115cbe3cd86df89df34701eb2089f23fc9b4032d504816ddc1468dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.902730 containerd[1508]: time="2025-05-16T00:14:49.902592998Z" level=error msg="Failed to destroy network for sandbox \"7f24f71ce748d4143e84b876314da1f9529e835c0abd87a893bf78eb4c3eb1c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.927788 containerd[1508]: time="2025-05-16T00:14:49.927705342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hz4bz,Uid:cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa5ac9cc0512eca1e4d59a52eb26d475f23f91ff00b520debcff65840095150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.929380 kubelet[2691]: E0516 00:14:49.928066 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa5ac9cc0512eca1e4d59a52eb26d475f23f91ff00b520debcff65840095150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:49.929380 kubelet[2691]: E0516 00:14:49.928157 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa5ac9cc0512eca1e4d59a52eb26d475f23f91ff00b520debcff65840095150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hz4bz" May 16 00:14:49.929380 kubelet[2691]: E0516 00:14:49.928189 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffa5ac9cc0512eca1e4d59a52eb26d475f23f91ff00b520debcff65840095150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hz4bz" May 16 00:14:49.929611 kubelet[2691]: E0516 00:14:49.928230 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hz4bz_calico-system(cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hz4bz_calico-system(cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffa5ac9cc0512eca1e4d59a52eb26d475f23f91ff00b520debcff65840095150\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hz4bz" 
podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:14:49.968406 containerd[1508]: time="2025-05-16T00:14:49.968334087Z" level=error msg="Failed to destroy network for sandbox \"098fe213a7cd0d658442fa4d1d02186b0ea5b9902096a05bae69b85e8d64569f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.001157 containerd[1508]: time="2025-05-16T00:14:50.001105255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pm6bh,Uid:6715f7be-e888-4ac0-8c1b-fde56f478f63,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f3f311381ef24c3299cea0df4f80173cd6f1dbf361f97d7abc4610d29bb94d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.001379 kubelet[2691]: E0516 00:14:50.001311 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f3f311381ef24c3299cea0df4f80173cd6f1dbf361f97d7abc4610d29bb94d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.001379 kubelet[2691]: E0516 00:14:50.001375 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f3f311381ef24c3299cea0df4f80173cd6f1dbf361f97d7abc4610d29bb94d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pm6bh" May 16 00:14:50.001506 kubelet[2691]: E0516 00:14:50.001400 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f3f311381ef24c3299cea0df4f80173cd6f1dbf361f97d7abc4610d29bb94d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pm6bh" May 16 00:14:50.001506 kubelet[2691]: E0516 00:14:50.001444 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pm6bh_kube-system(6715f7be-e888-4ac0-8c1b-fde56f478f63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pm6bh_kube-system(6715f7be-e888-4ac0-8c1b-fde56f478f63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f3f311381ef24c3299cea0df4f80173cd6f1dbf361f97d7abc4610d29bb94d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pm6bh" podUID="6715f7be-e888-4ac0-8c1b-fde56f478f63" May 16 00:14:50.010267 containerd[1508]: time="2025-05-16T00:14:50.010202436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-qdbqj,Uid:0d77b5c7-302f-48ea-88fb-1eaa21477648,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"989b572c9eef1d75bea8c229f8a88bc999b6be5b8ec4a07eaab2ab1b596259ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.010372 kubelet[2691]: E0516 00:14:50.010330 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989b572c9eef1d75bea8c229f8a88bc999b6be5b8ec4a07eaab2ab1b596259ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.010428 kubelet[2691]: E0516 00:14:50.010376 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989b572c9eef1d75bea8c229f8a88bc999b6be5b8ec4a07eaab2ab1b596259ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" May 16 00:14:50.010428 kubelet[2691]: E0516 00:14:50.010405 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989b572c9eef1d75bea8c229f8a88bc999b6be5b8ec4a07eaab2ab1b596259ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" May 16 00:14:50.010481 kubelet[2691]: E0516 00:14:50.010436 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fc6f48b5-qdbqj_calico-apiserver(0d77b5c7-302f-48ea-88fb-1eaa21477648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fc6f48b5-qdbqj_calico-apiserver(0d77b5c7-302f-48ea-88fb-1eaa21477648)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"989b572c9eef1d75bea8c229f8a88bc999b6be5b8ec4a07eaab2ab1b596259ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" podUID="0d77b5c7-302f-48ea-88fb-1eaa21477648" May 16 00:14:50.032924 containerd[1508]: time="2025-05-16T00:14:50.032883216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f5796b-ssmkv,Uid:2cb50f28-04bb-4f6a-ac09-21e960ec8f86,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cc89d0ad9a97c18b125112a9d179dc6995d57dce641c9085e98406ccd3ceba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.033093 kubelet[2691]: E0516 00:14:50.032998 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cc89d0ad9a97c18b125112a9d179dc6995d57dce641c9085e98406ccd3ceba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" May 16 00:14:50.033093 kubelet[2691]: E0516 00:14:50.033074 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cc89d0ad9a97c18b125112a9d179dc6995d57dce641c9085e98406ccd3ceba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" May 16 00:14:50.033184 kubelet[2691]: E0516 00:14:50.033092 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cc89d0ad9a97c18b125112a9d179dc6995d57dce641c9085e98406ccd3ceba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" May 16 00:14:50.033184 kubelet[2691]: E0516 00:14:50.033124 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-847f5796b-ssmkv_calico-system(2cb50f28-04bb-4f6a-ac09-21e960ec8f86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-847f5796b-ssmkv_calico-system(2cb50f28-04bb-4f6a-ac09-21e960ec8f86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cc89d0ad9a97c18b125112a9d179dc6995d57dce641c9085e98406ccd3ceba3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" podUID="2cb50f28-04bb-4f6a-ac09-21e960ec8f86" May 16 00:14:50.040901 sshd[3756]: Connection closed by 10.0.0.1 port 50840 May 16 00:14:50.041252 sshd-session[3658]: pam_unix(sshd:session): session closed for user core May 16 00:14:50.045833 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:50840.service: Deactivated successfully. May 16 00:14:50.048269 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:14:50.049113 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. May 16 00:14:50.050206 systemd-logind[1490]: Removed session 10. 
May 16 00:14:50.071554 containerd[1508]: time="2025-05-16T00:14:50.071477435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zxch,Uid:5e130f73-1c5a-4996-b6b9-e20047051b8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e97b5fb6b983d6cb97b687f3bdd5a026791a87e8f2e47785bc755f083a162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.071749 kubelet[2691]: E0516 00:14:50.071711 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e97b5fb6b983d6cb97b687f3bdd5a026791a87e8f2e47785bc755f083a162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.071833 kubelet[2691]: E0516 00:14:50.071767 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e97b5fb6b983d6cb97b687f3bdd5a026791a87e8f2e47785bc755f083a162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7zxch" May 16 00:14:50.071833 kubelet[2691]: E0516 00:14:50.071786 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e97b5fb6b983d6cb97b687f3bdd5a026791a87e8f2e47785bc755f083a162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7zxch" May 16 00:14:50.071903 kubelet[2691]: E0516 00:14:50.071834 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7zxch_kube-system(5e130f73-1c5a-4996-b6b9-e20047051b8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7zxch_kube-system(5e130f73-1c5a-4996-b6b9-e20047051b8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"170e97b5fb6b983d6cb97b687f3bdd5a026791a87e8f2e47785bc755f083a162\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7zxch" podUID="5e130f73-1c5a-4996-b6b9-e20047051b8e" May 16 00:14:50.088914 containerd[1508]: time="2025-05-16T00:14:50.088843700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548dd86c66-prftl,Uid:65274ddd-5bfc-4ed3-a962-8824ef50bc83,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38553fd76115cbe3cd86df89df34701eb2089f23fc9b4032d504816ddc1468dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.089105 kubelet[2691]: E0516 00:14:50.089028 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"38553fd76115cbe3cd86df89df34701eb2089f23fc9b4032d504816ddc1468dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.089105 kubelet[2691]: E0516 00:14:50.089074 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38553fd76115cbe3cd86df89df34701eb2089f23fc9b4032d504816ddc1468dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548dd86c66-prftl" May 16 00:14:50.089105 kubelet[2691]: E0516 00:14:50.089094 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38553fd76115cbe3cd86df89df34701eb2089f23fc9b4032d504816ddc1468dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548dd86c66-prftl" May 16 00:14:50.089264 kubelet[2691]: E0516 00:14:50.089125 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-548dd86c66-prftl_calico-system(65274ddd-5bfc-4ed3-a962-8824ef50bc83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-548dd86c66-prftl_calico-system(65274ddd-5bfc-4ed3-a962-8824ef50bc83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38553fd76115cbe3cd86df89df34701eb2089f23fc9b4032d504816ddc1468dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-548dd86c66-prftl" podUID="65274ddd-5bfc-4ed3-a962-8824ef50bc83" May 16 00:14:50.118314 containerd[1508]: time="2025-05-16T00:14:50.118243162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-d79sc,Uid:b11c09e5-3cbb-468c-abbe-dc156fab2b5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f24f71ce748d4143e84b876314da1f9529e835c0abd87a893bf78eb4c3eb1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.118522 kubelet[2691]: E0516 00:14:50.118464 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f24f71ce748d4143e84b876314da1f9529e835c0abd87a893bf78eb4c3eb1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.118522 kubelet[2691]: E0516 00:14:50.118515 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f24f71ce748d4143e84b876314da1f9529e835c0abd87a893bf78eb4c3eb1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-78d55f7ddc-d79sc" May 16 00:14:50.118614 kubelet[2691]: E0516 00:14:50.118533 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f24f71ce748d4143e84b876314da1f9529e835c0abd87a893bf78eb4c3eb1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-d79sc" May 16 00:14:50.118614 kubelet[2691]: E0516 00:14:50.118570 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-d79sc_calico-system(b11c09e5-3cbb-468c-abbe-dc156fab2b5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-d79sc_calico-system(b11c09e5-3cbb-468c-abbe-dc156fab2b5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f24f71ce748d4143e84b876314da1f9529e835c0abd87a893bf78eb4c3eb1c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-d79sc" podUID="b11c09e5-3cbb-468c-abbe-dc156fab2b5c" May 16 00:14:50.134380 containerd[1508]: time="2025-05-16T00:14:50.134306472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-rr84h,Uid:25e056e5-f7d2-462c-bd3a-fd78b9601b08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"098fe213a7cd0d658442fa4d1d02186b0ea5b9902096a05bae69b85e8d64569f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.134527 kubelet[2691]: E0516 00:14:50.134484 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"098fe213a7cd0d658442fa4d1d02186b0ea5b9902096a05bae69b85e8d64569f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:14:50.134586 kubelet[2691]: E0516 00:14:50.134536 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"098fe213a7cd0d658442fa4d1d02186b0ea5b9902096a05bae69b85e8d64569f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" May 16 00:14:50.134586 kubelet[2691]: E0516 00:14:50.134550 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"098fe213a7cd0d658442fa4d1d02186b0ea5b9902096a05bae69b85e8d64569f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" May 16 00:14:50.134641 kubelet[2691]: E0516 00:14:50.134580 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-67fc6f48b5-rr84h_calico-apiserver(25e056e5-f7d2-462c-bd3a-fd78b9601b08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fc6f48b5-rr84h_calico-apiserver(25e056e5-f7d2-462c-bd3a-fd78b9601b08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"098fe213a7cd0d658442fa4d1d02186b0ea5b9902096a05bae69b85e8d64569f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" podUID="25e056e5-f7d2-462c-bd3a-fd78b9601b08" May 16 00:14:50.190961 systemd[1]: run-netns-cni\x2d39ff9bab\x2d8326\x2d72ad\x2d0b1a\x2dba898378ce6b.mount: Deactivated successfully. May 16 00:14:50.193636 systemd[1]: run-netns-cni\x2d8c9d202a\x2d6cdb\x2da60e\x2d4c90\x2da0d14488c10a.mount: Deactivated successfully. May 16 00:14:50.193816 systemd[1]: run-netns-cni\x2d9eac7d9d\x2de049\x2de534\x2d598a\x2d17b40a72dc6b.mount: Deactivated successfully. May 16 00:14:50.193965 systemd[1]: run-netns-cni\x2ddb905270\x2de4c9\x2d2993\x2d3c30\x2d8cdb603558ff.mount: Deactivated successfully. May 16 00:14:55.053775 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:50846.service - OpenSSH per-connection server daemon (10.0.0.1:50846). May 16 00:14:55.112514 sshd[3834]: Accepted publickey for core from 10.0.0.1 port 50846 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:14:55.114429 sshd-session[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:14:55.119127 systemd-logind[1490]: New session 11 of user core. May 16 00:14:55.130624 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 00:14:55.297572 sshd[3836]: Connection closed by 10.0.0.1 port 50846 May 16 00:14:55.297913 sshd-session[3834]: pam_unix(sshd:session): session closed for user core May 16 00:14:55.302081 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:50846.service: Deactivated successfully. May 16 00:14:55.304193 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:14:55.305119 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. May 16 00:14:55.306074 systemd-logind[1490]: Removed session 11. May 16 00:15:00.230107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85211526.mount: Deactivated successfully. May 16 00:15:00.318273 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:49200.service - OpenSSH per-connection server daemon (10.0.0.1:49200). 
May 16 00:15:01.150438 containerd[1508]: time="2025-05-16T00:15:01.150354779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-qdbqj,Uid:0d77b5c7-302f-48ea-88fb-1eaa21477648,Namespace:calico-apiserver,Attempt:0,}" May 16 00:15:02.150831 kubelet[2691]: E0516 00:15:02.150738 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:02.151724 containerd[1508]: time="2025-05-16T00:15:02.151140950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zxch,Uid:5e130f73-1c5a-4996-b6b9-e20047051b8e,Namespace:kube-system,Attempt:0,}" May 16 00:15:02.151724 containerd[1508]: time="2025-05-16T00:15:02.151324752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548dd86c66-prftl,Uid:65274ddd-5bfc-4ed3-a962-8824ef50bc83,Namespace:calico-system,Attempt:0,}" May 16 00:15:02.151724 containerd[1508]: time="2025-05-16T00:15:02.151425048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f5796b-ssmkv,Uid:2cb50f28-04bb-4f6a-ac09-21e960ec8f86,Namespace:calico-system,Attempt:0,}" May 16 00:15:02.151724 containerd[1508]: time="2025-05-16T00:15:02.151518671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hz4bz,Uid:cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff,Namespace:calico-system,Attempt:0,}" May 16 00:15:02.424069 sshd[3854]: Accepted publickey for core from 10.0.0.1 port 49200 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:02.435464 sshd-session[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:02.488321 systemd-logind[1490]: New session 12 of user core. May 16 00:15:02.498751 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 00:15:02.991838 sshd[3857]: Connection closed by 10.0.0.1 port 49200 May 16 00:15:02.993190 sshd-session[3854]: pam_unix(sshd:session): session closed for user core May 16 00:15:02.998547 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:49200.service: Deactivated successfully. May 16 00:15:03.001330 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:15:03.005378 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. May 16 00:15:03.041070 systemd-logind[1490]: Removed session 12. May 16 00:15:03.046214 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:49202.service - OpenSSH per-connection server daemon (10.0.0.1:49202). May 16 00:15:03.150583 containerd[1508]: time="2025-05-16T00:15:03.150260537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-d79sc,Uid:b11c09e5-3cbb-468c-abbe-dc156fab2b5c,Namespace:calico-system,Attempt:0,}" May 16 00:15:03.821620 sshd[3870]: Accepted publickey for core from 10.0.0.1 port 49202 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:03.824697 sshd-session[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:03.843719 systemd-logind[1490]: New session 13 of user core. May 16 00:15:03.857707 systemd[1]: Started session-13.scope - Session 13 of User core. 
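Note: the recurring kubelet "Nameserver limits exceeded" event (dns.go:153 above) fires because the node's resolv.conf lists more nameservers than the classic glibc limit of three; kubelet keeps the first three and logs the applied line, here "1.1.1.1 1.0.0.1 8.8.8.8". A toy sketch of that truncation, assuming a fourth, hypothetical resolv.conf entry triggered the warning:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // classic resolv.conf limit (glibc MAXNS)

    // applyNameserverLimit keeps only the first maxNameservers entries,
    // mirroring the truncation the kubelet warning describes.
    func applyNameserverLimit(servers []string) []string {
        if len(servers) > maxNameservers {
            return servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        all := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // 4th entry is hypothetical
        fmt.Println("applied nameserver line is:", strings.Join(applyNameserverLimit(all), " "))
    }
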
May 16 00:15:03.991271 containerd[1508]: time="2025-05-16T00:15:03.987601541Z" level=error msg="Failed to destroy network for sandbox \"a53b3efcdc2f03c22d2426fd031e35763ccc2291434c1f1941c216c7fda1100e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:03.993965 systemd[1]: run-netns-cni\x2d83a8decb\x2d8f2b\x2dc1c7\x2d7615\x2de978752bac2e.mount: Deactivated successfully. May 16 00:15:04.038563 containerd[1508]: time="2025-05-16T00:15:04.038310190Z" level=error msg="Failed to destroy network for sandbox \"483ce7efaf8fca1e3461485db610d48a659c49be4911a1bbc1fa44e26d1b8152\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.048393 systemd[1]: run-netns-cni\x2df1f6e31d\x2d066e\x2d20d8\x2d285f\x2dd37ffe6226c3.mount: Deactivated successfully. May 16 00:15:04.073109 containerd[1508]: time="2025-05-16T00:15:04.072953972Z" level=error msg="Failed to destroy network for sandbox \"0507daa0d6587892f5eb87f5f56aba462e1af60e2f460309a80c110041b79d47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.076200 systemd[1]: run-netns-cni\x2d0167da6a\x2dd774\x2d553e\x2d2884\x2dab1da03fbbd8.mount: Deactivated successfully. May 16 00:15:04.155783 containerd[1508]: time="2025-05-16T00:15:04.155741724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-rr84h,Uid:25e056e5-f7d2-462c-bd3a-fd78b9601b08,Namespace:calico-apiserver,Attempt:0,}" May 16 00:15:04.158862 containerd[1508]: time="2025-05-16T00:15:04.158753541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:04.165539 containerd[1508]: time="2025-05-16T00:15:04.165460560Z" level=error msg="Failed to destroy network for sandbox \"39a49e0f4920b78e3903410306265eb9cf8094d129b1891cf86e991c51f6b73b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.168448 systemd[1]: run-netns-cni\x2d96485d75\x2df76e\x2d084f\x2d37cd\x2deb71cc5b0a5a.mount: Deactivated successfully. May 16 00:15:04.183562 sshd[3917]: Connection closed by 10.0.0.1 port 49202 May 16 00:15:04.182281 sshd-session[3870]: pam_unix(sshd:session): session closed for user core May 16 00:15:04.188176 containerd[1508]: time="2025-05-16T00:15:04.188122465Z" level=error msg="Failed to destroy network for sandbox \"e968735a5913d29db36cedc90db0e00c1eaf99c34efa28034505678fbbdce7d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.196139 systemd[1]: run-netns-cni\x2d79cf28b6\x2d0ce0\x2dc01e\x2dfd40\x2d42bb4e71d217.mount: Deactivated successfully. May 16 00:15:04.197343 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:49202.service: Deactivated successfully. May 16 00:15:04.199334 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:15:04.201585 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. 
May 16 00:15:04.203015 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:49218.service - OpenSSH per-connection server daemon (10.0.0.1:49218). May 16 00:15:04.203914 systemd-logind[1490]: Removed session 13. May 16 00:15:04.287235 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 49218 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:04.289259 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:04.294573 systemd-logind[1490]: New session 14 of user core. May 16 00:15:04.301196 containerd[1508]: time="2025-05-16T00:15:04.301047155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548dd86c66-prftl,Uid:65274ddd-5bfc-4ed3-a962-8824ef50bc83,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53b3efcdc2f03c22d2426fd031e35763ccc2291434c1f1941c216c7fda1100e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.301405 kubelet[2691]: E0516 00:15:04.301345 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53b3efcdc2f03c22d2426fd031e35763ccc2291434c1f1941c216c7fda1100e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.303636 kubelet[2691]: E0516 00:15:04.301443 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53b3efcdc2f03c22d2426fd031e35763ccc2291434c1f1941c216c7fda1100e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548dd86c66-prftl" May 16 00:15:04.303636 kubelet[2691]: E0516 00:15:04.301466 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53b3efcdc2f03c22d2426fd031e35763ccc2291434c1f1941c216c7fda1100e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548dd86c66-prftl" May 16 00:15:04.303636 kubelet[2691]: E0516 00:15:04.301513 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-548dd86c66-prftl_calico-system(65274ddd-5bfc-4ed3-a962-8824ef50bc83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-548dd86c66-prftl_calico-system(65274ddd-5bfc-4ed3-a962-8824ef50bc83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a53b3efcdc2f03c22d2426fd031e35763ccc2291434c1f1941c216c7fda1100e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-548dd86c66-prftl" podUID="65274ddd-5bfc-4ed3-a962-8824ef50bc83" May 16 00:15:04.303520 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 16 00:15:04.335927 containerd[1508]: time="2025-05-16T00:15:04.335728717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zxch,Uid:5e130f73-1c5a-4996-b6b9-e20047051b8e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"483ce7efaf8fca1e3461485db610d48a659c49be4911a1bbc1fa44e26d1b8152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.336126 kubelet[2691]: E0516 00:15:04.336058 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483ce7efaf8fca1e3461485db610d48a659c49be4911a1bbc1fa44e26d1b8152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.336185 kubelet[2691]: E0516 00:15:04.336139 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483ce7efaf8fca1e3461485db610d48a659c49be4911a1bbc1fa44e26d1b8152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7zxch" May 16 00:15:04.336185 kubelet[2691]: E0516 00:15:04.336170 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483ce7efaf8fca1e3461485db610d48a659c49be4911a1bbc1fa44e26d1b8152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7zxch" May 16 00:15:04.336277 kubelet[2691]: E0516 00:15:04.336237 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7zxch_kube-system(5e130f73-1c5a-4996-b6b9-e20047051b8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7zxch_kube-system(5e130f73-1c5a-4996-b6b9-e20047051b8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"483ce7efaf8fca1e3461485db610d48a659c49be4911a1bbc1fa44e26d1b8152\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7zxch" podUID="5e130f73-1c5a-4996-b6b9-e20047051b8e" May 16 00:15:04.382134 containerd[1508]: time="2025-05-16T00:15:04.377405826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-qdbqj,Uid:0d77b5c7-302f-48ea-88fb-1eaa21477648,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0507daa0d6587892f5eb87f5f56aba462e1af60e2f460309a80c110041b79d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.384373 kubelet[2691]: E0516 00:15:04.382766 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"0507daa0d6587892f5eb87f5f56aba462e1af60e2f460309a80c110041b79d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.384373 kubelet[2691]: E0516 00:15:04.383210 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0507daa0d6587892f5eb87f5f56aba462e1af60e2f460309a80c110041b79d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" May 16 00:15:04.384373 kubelet[2691]: E0516 00:15:04.383303 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0507daa0d6587892f5eb87f5f56aba462e1af60e2f460309a80c110041b79d47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" May 16 00:15:04.384539 kubelet[2691]: E0516 00:15:04.383932 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fc6f48b5-qdbqj_calico-apiserver(0d77b5c7-302f-48ea-88fb-1eaa21477648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fc6f48b5-qdbqj_calico-apiserver(0d77b5c7-302f-48ea-88fb-1eaa21477648)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0507daa0d6587892f5eb87f5f56aba462e1af60e2f460309a80c110041b79d47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" podUID="0d77b5c7-302f-48ea-88fb-1eaa21477648" May 16 00:15:04.505834 containerd[1508]: time="2025-05-16T00:15:04.505607558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 16 00:15:04.544669 containerd[1508]: time="2025-05-16T00:15:04.544555097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f5796b-ssmkv,Uid:2cb50f28-04bb-4f6a-ac09-21e960ec8f86,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a49e0f4920b78e3903410306265eb9cf8094d129b1891cf86e991c51f6b73b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.544917 kubelet[2691]: E0516 00:15:04.544873 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a49e0f4920b78e3903410306265eb9cf8094d129b1891cf86e991c51f6b73b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.545008 kubelet[2691]: E0516 00:15:04.544940 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"39a49e0f4920b78e3903410306265eb9cf8094d129b1891cf86e991c51f6b73b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" May 16 00:15:04.545008 kubelet[2691]: E0516 00:15:04.544961 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a49e0f4920b78e3903410306265eb9cf8094d129b1891cf86e991c51f6b73b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" May 16 00:15:04.545089 kubelet[2691]: E0516 00:15:04.545001 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-847f5796b-ssmkv_calico-system(2cb50f28-04bb-4f6a-ac09-21e960ec8f86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-847f5796b-ssmkv_calico-system(2cb50f28-04bb-4f6a-ac09-21e960ec8f86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39a49e0f4920b78e3903410306265eb9cf8094d129b1891cf86e991c51f6b73b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" podUID="2cb50f28-04bb-4f6a-ac09-21e960ec8f86" May 16 00:15:04.628574 containerd[1508]: time="2025-05-16T00:15:04.628305635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hz4bz,Uid:cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e968735a5913d29db36cedc90db0e00c1eaf99c34efa28034505678fbbdce7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.628802 kubelet[2691]: E0516 00:15:04.628693 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e968735a5913d29db36cedc90db0e00c1eaf99c34efa28034505678fbbdce7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.628802 kubelet[2691]: E0516 00:15:04.628770 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e968735a5913d29db36cedc90db0e00c1eaf99c34efa28034505678fbbdce7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hz4bz" May 16 00:15:04.628802 kubelet[2691]: E0516 00:15:04.628799 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e968735a5913d29db36cedc90db0e00c1eaf99c34efa28034505678fbbdce7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hz4bz" May 16 00:15:04.629017 kubelet[2691]: E0516 00:15:04.628864 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hz4bz_calico-system(cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hz4bz_calico-system(cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e968735a5913d29db36cedc90db0e00c1eaf99c34efa28034505678fbbdce7d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hz4bz" podUID="cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff" May 16 00:15:04.806193 containerd[1508]: time="2025-05-16T00:15:04.806113484Z" level=error msg="Failed to destroy network for sandbox \"ce9c8dc0791deef056bbb12f92e6180142a55e241d79537e6676319e1a429458\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.818296 sshd[4049]: Connection closed by 10.0.0.1 port 49218 May 16 00:15:04.818658 sshd-session[4046]: pam_unix(sshd:session): session closed for user core May 16 00:15:04.824328 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. May 16 00:15:04.825024 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:49218.service: Deactivated successfully. May 16 00:15:04.828467 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:15:04.830173 systemd-logind[1490]: Removed session 14. 
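Note: teardown is gated the same way as setup. The "failed (delete)" errors show the plugin refusing CNI DEL for want of the same nodename file, after which containerd still drops the sandbox's namespace bind mount (the run-netns-cni\x2d... deactivations). Each retried sandbox gets a fresh ID, which is why the hex names keep changing between attempts. The namespaces live under /run/netns while a sandbox is in flight; a trivial lister, illustrative only (run on the node, and the directory may be empty or absent once cleanup finishes):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // List whatever CNI network namespaces currently exist.
        entries, err := os.ReadDir("/run/netns")
        if err != nil {
            fmt.Println("no netns dir:", err)
            return
        }
        for _, e := range entries {
            fmt.Println("/run/netns/" + e.Name())
        }
    }
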
May 16 00:15:04.847641 containerd[1508]: time="2025-05-16T00:15:04.847573559Z" level=error msg="Failed to destroy network for sandbox \"7e7d2f8c1c3bec46f61b78cedbe65bd09e288684e42760f18f38ea9914e447b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.870639 containerd[1508]: time="2025-05-16T00:15:04.870567514Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:04.925570 containerd[1508]: time="2025-05-16T00:15:04.925214903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-d79sc,Uid:b11c09e5-3cbb-468c-abbe-dc156fab2b5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce9c8dc0791deef056bbb12f92e6180142a55e241d79537e6676319e1a429458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.925790 kubelet[2691]: E0516 00:15:04.925684 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce9c8dc0791deef056bbb12f92e6180142a55e241d79537e6676319e1a429458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.925790 kubelet[2691]: E0516 00:15:04.925764 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce9c8dc0791deef056bbb12f92e6180142a55e241d79537e6676319e1a429458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-d79sc" May 16 00:15:04.925884 kubelet[2691]: E0516 00:15:04.925791 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce9c8dc0791deef056bbb12f92e6180142a55e241d79537e6676319e1a429458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-d79sc" May 16 00:15:04.925884 kubelet[2691]: E0516 00:15:04.925844 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-d79sc_calico-system(b11c09e5-3cbb-468c-abbe-dc156fab2b5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-d79sc_calico-system(b11c09e5-3cbb-468c-abbe-dc156fab2b5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce9c8dc0791deef056bbb12f92e6180142a55e241d79537e6676319e1a429458\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-d79sc" podUID="b11c09e5-3cbb-468c-abbe-dc156fab2b5c" May 16 00:15:04.947908 containerd[1508]: time="2025-05-16T00:15:04.947797210Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-rr84h,Uid:25e056e5-f7d2-462c-bd3a-fd78b9601b08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7d2f8c1c3bec46f61b78cedbe65bd09e288684e42760f18f38ea9914e447b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.948835 kubelet[2691]: E0516 00:15:04.948757 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7d2f8c1c3bec46f61b78cedbe65bd09e288684e42760f18f38ea9914e447b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:04.948945 kubelet[2691]: E0516 00:15:04.948837 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7d2f8c1c3bec46f61b78cedbe65bd09e288684e42760f18f38ea9914e447b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" May 16 00:15:04.948945 kubelet[2691]: E0516 00:15:04.948862 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e7d2f8c1c3bec46f61b78cedbe65bd09e288684e42760f18f38ea9914e447b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" May 16 00:15:04.949031 kubelet[2691]: E0516 00:15:04.948922 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fc6f48b5-rr84h_calico-apiserver(25e056e5-f7d2-462c-bd3a-fd78b9601b08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fc6f48b5-rr84h_calico-apiserver(25e056e5-f7d2-462c-bd3a-fd78b9601b08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e7d2f8c1c3bec46f61b78cedbe65bd09e288684e42760f18f38ea9914e447b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" podUID="25e056e5-f7d2-462c-bd3a-fd78b9601b08" May 16 00:15:05.004144 containerd[1508]: time="2025-05-16T00:15:05.003081158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:05.015989 containerd[1508]: time="2025-05-16T00:15:05.005578194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 15.203207839s" May 16 00:15:05.015989 
containerd[1508]: time="2025-05-16T00:15:05.005630221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 16 00:15:05.028335 containerd[1508]: time="2025-05-16T00:15:05.028271470Z" level=info msg="CreateContainer within sandbox \"bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 00:15:05.153464 kubelet[2691]: E0516 00:15:05.150559 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:05.153636 containerd[1508]: time="2025-05-16T00:15:05.151050123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pm6bh,Uid:6715f7be-e888-4ac0-8c1b-fde56f478f63,Namespace:kube-system,Attempt:0,}" May 16 00:15:05.204990 containerd[1508]: time="2025-05-16T00:15:05.204899369Z" level=info msg="Container 4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5: CDI devices from CRI Config.CDIDevices: []" May 16 00:15:05.508986 containerd[1508]: time="2025-05-16T00:15:05.508214882Z" level=error msg="Failed to destroy network for sandbox \"abd0371ef7f0de78b74fb378b03d51a969712a121784c28059ffa449f1203a2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:05.633814 containerd[1508]: time="2025-05-16T00:15:05.633758182Z" level=info msg="CreateContainer within sandbox \"bf0a011c45417fac4115ac65af9a950a36da1340e03038fc3b6268abe4d7a98c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5\"" May 16 00:15:05.637284 containerd[1508]: time="2025-05-16T00:15:05.634448882Z" level=info msg="StartContainer for \"4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5\"" May 16 00:15:05.637284 containerd[1508]: time="2025-05-16T00:15:05.636436185Z" level=info msg="connecting to shim 4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5" address="unix:///run/containerd/s/f393a9d25e079fd071fa7e96c15e85946251eeb9c3108b2ec4a95ab5f72b4dbf" protocol=ttrpc version=3 May 16 00:15:05.644567 containerd[1508]: time="2025-05-16T00:15:05.644205730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pm6bh,Uid:6715f7be-e888-4ac0-8c1b-fde56f478f63,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd0371ef7f0de78b74fb378b03d51a969712a121784c28059ffa449f1203a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:05.645119 kubelet[2691]: E0516 00:15:05.645061 2691 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd0371ef7f0de78b74fb378b03d51a969712a121784c28059ffa449f1203a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 00:15:05.645538 kubelet[2691]: E0516 00:15:05.645146 2691 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"abd0371ef7f0de78b74fb378b03d51a969712a121784c28059ffa449f1203a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pm6bh" May 16 00:15:05.645538 kubelet[2691]: E0516 00:15:05.645174 2691 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd0371ef7f0de78b74fb378b03d51a969712a121784c28059ffa449f1203a2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pm6bh" May 16 00:15:05.645538 kubelet[2691]: E0516 00:15:05.645225 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pm6bh_kube-system(6715f7be-e888-4ac0-8c1b-fde56f478f63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pm6bh_kube-system(6715f7be-e888-4ac0-8c1b-fde56f478f63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abd0371ef7f0de78b74fb378b03d51a969712a121784c28059ffa449f1203a2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pm6bh" podUID="6715f7be-e888-4ac0-8c1b-fde56f478f63" May 16 00:15:05.685648 systemd[1]: Started cri-containerd-4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5.scope - libcontainer container 4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5. May 16 00:15:05.907140 containerd[1508]: time="2025-05-16T00:15:05.905822580Z" level=info msg="StartContainer for \"4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5\" returns successfully" May 16 00:15:06.001771 systemd[1]: run-netns-cni\x2d0b735ce4\x2df931\x2d8114\x2d1f99\x2d0681a9c8b690.mount: Deactivated successfully. May 16 00:15:06.003328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914652112.mount: Deactivated successfully. May 16 00:15:06.015345 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 00:15:06.018096 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
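Note: this is the turning point of the log. The calico/node image has finished pulling, the calico-node container has started, and the kernel loads WireGuard (presumably probed by calico-node as its dataplane comes up); the sandbox retries that follow start succeeding. As a back-of-the-envelope check over the logged pull figures (156396372 bytes read, reported as pulled "in 15.203207839s"), the transfer rate works out to roughly 10.3 MB/s:

    package main

    import "fmt"

    func main() {
        // Values taken verbatim from the containerd log entries above.
        const bytesRead = 156396372.0 // "active requests=0, bytes read=156396372"
        const seconds = 15.203207839  // "... in 15.203207839s"
        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n",
            bytesRead/seconds/1e6, bytesRead/seconds/(1<<20))
    }
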
May 16 00:15:06.864700 kubelet[2691]: I0516 00:15:06.862608 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzn24\" (UniqueName: \"kubernetes.io/projected/65274ddd-5bfc-4ed3-a962-8824ef50bc83-kube-api-access-kzn24\") pod \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\" (UID: \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\") " May 16 00:15:06.864700 kubelet[2691]: I0516 00:15:06.862680 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65274ddd-5bfc-4ed3-a962-8824ef50bc83-whisker-ca-bundle\") pod \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\" (UID: \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\") " May 16 00:15:06.864700 kubelet[2691]: I0516 00:15:06.862732 2691 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65274ddd-5bfc-4ed3-a962-8824ef50bc83-whisker-backend-key-pair\") pod \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\" (UID: \"65274ddd-5bfc-4ed3-a962-8824ef50bc83\") " May 16 00:15:06.864700 kubelet[2691]: I0516 00:15:06.864589 2691 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65274ddd-5bfc-4ed3-a962-8824ef50bc83-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "65274ddd-5bfc-4ed3-a962-8824ef50bc83" (UID: "65274ddd-5bfc-4ed3-a962-8824ef50bc83"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:15:06.879606 kubelet[2691]: I0516 00:15:06.878176 2691 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65274ddd-5bfc-4ed3-a962-8824ef50bc83-kube-api-access-kzn24" (OuterVolumeSpecName: "kube-api-access-kzn24") pod "65274ddd-5bfc-4ed3-a962-8824ef50bc83" (UID: "65274ddd-5bfc-4ed3-a962-8824ef50bc83"). InnerVolumeSpecName "kube-api-access-kzn24". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:15:06.882326 systemd[1]: var-lib-kubelet-pods-65274ddd\x2d5bfc\x2d4ed3\x2da962\x2d8824ef50bc83-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkzn24.mount: Deactivated successfully. May 16 00:15:06.882880 kubelet[2691]: I0516 00:15:06.882637 2691 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65274ddd-5bfc-4ed3-a962-8824ef50bc83-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "65274ddd-5bfc-4ed3-a962-8824ef50bc83" (UID: "65274ddd-5bfc-4ed3-a962-8824ef50bc83"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:15:06.883661 systemd[1]: var-lib-kubelet-pods-65274ddd\x2d5bfc\x2d4ed3\x2da962\x2d8824ef50bc83-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 16 00:15:06.936669 systemd[1]: Removed slice kubepods-besteffort-pod65274ddd_5bfc_4ed3_a962_8824ef50bc83.slice - libcontainer container kubepods-besteffort-pod65274ddd_5bfc_4ed3_a962_8824ef50bc83.slice. 
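Note: with the whisker pod gone, kubelet's volume reconciler tears down its three volumes (the projected service-account token, the CA-bundle configmap, and the backend key-pair secret) and systemd confirms the matching mount units. Each volume sits at a predictable path under the pod directory, which is exactly what the escaped unit names encode (\x7e is "~", \x2d is "-"). An illustrative path builder for that layout:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // podVolumePath reproduces kubelet's on-disk volume layout:
    // /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volumeName>.
    func podVolumePath(podUID, plugin, volume string) string {
        return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", plugin, volume)
    }

    func main() {
        fmt.Println(podVolumePath("65274ddd-5bfc-4ed3-a962-8824ef50bc83",
            "kubernetes.io~secret", "whisker-backend-key-pair"))
        // -> /var/lib/kubelet/pods/65274ddd-.../volumes/kubernetes.io~secret/whisker-backend-key-pair
    }
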
May 16 00:15:06.963809 kubelet[2691]: I0516 00:15:06.963756 2691 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65274ddd-5bfc-4ed3-a962-8824ef50bc83-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 16 00:15:06.963809 kubelet[2691]: I0516 00:15:06.963793 2691 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65274ddd-5bfc-4ed3-a962-8824ef50bc83-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 16 00:15:06.963809 kubelet[2691]: I0516 00:15:06.963807 2691 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kzn24\" (UniqueName: \"kubernetes.io/projected/65274ddd-5bfc-4ed3-a962-8824ef50bc83-kube-api-access-kzn24\") on node \"localhost\" DevicePath \"\"" May 16 00:15:06.989035 kubelet[2691]: I0516 00:15:06.988942 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8pxfz" podStartSLOduration=7.218260703 podStartE2EDuration="35.988922128s" podCreationTimestamp="2025-05-16 00:14:31 +0000 UTC" firstStartedPulling="2025-05-16 00:14:36.246259428 +0000 UTC m=+30.186713171" lastFinishedPulling="2025-05-16 00:15:05.016920844 +0000 UTC m=+58.957374596" observedRunningTime="2025-05-16 00:15:06.986118671 +0000 UTC m=+60.926572424" watchObservedRunningTime="2025-05-16 00:15:06.988922128 +0000 UTC m=+60.929375870" May 16 00:15:07.113545 containerd[1508]: time="2025-05-16T00:15:07.113485855Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5\" id:\"b3dd40d2cb1b7d1dcb8f56859ba75d26923ab682fd973912e67baf0beef8fc5f\" pid:4226 exit_status:1 exited_at:{seconds:1747354507 nanos:112965439}" May 16 00:15:08.079797 containerd[1508]: time="2025-05-16T00:15:08.079748309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5\" id:\"1413923c9ae4a2dddf891d19d699a8633f503177d8d25e87733db6c4994f7117\" pid:4262 exit_status:1 exited_at:{seconds:1747354508 nanos:79324193}" May 16 00:15:08.159086 kubelet[2691]: I0516 00:15:08.159013 2691 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65274ddd-5bfc-4ed3-a962-8824ef50bc83" path="/var/lib/kubelet/pods/65274ddd-5bfc-4ed3-a962-8824ef50bc83/volumes" May 16 00:15:09.041599 kernel: bpftool[4402]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 16 00:15:09.389739 systemd-networkd[1420]: vxlan.calico: Link UP May 16 00:15:09.389747 systemd-networkd[1420]: vxlan.calico: Gained carrier May 16 00:15:09.838177 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:46130.service - OpenSSH per-connection server daemon (10.0.0.1:46130). May 16 00:15:09.892384 sshd[4479]: Accepted publickey for core from 10.0.0.1 port 46130 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:09.894904 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:09.900107 systemd-logind[1490]: New session 15 of user core. May 16 00:15:09.910541 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 00:15:10.047657 sshd[4481]: Connection closed by 10.0.0.1 port 46130 May 16 00:15:10.048015 sshd-session[4479]: pam_unix(sshd:session): session closed for user core May 16 00:15:10.052436 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:46130.service: Deactivated successfully. 
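Note: the pod_startup_latency_tracker entry for calico-node-8pxfz decomposes cleanly: podStartSLOduration is the end-to-end startup time minus the image-pull window, and the monotonic m=+ offsets in the entry reproduce it exactly. Pulling took 58.957374596 - 30.186713171 = 28.770661425s, and 35.988922128 - 28.770661425 = 7.218260703s, matching the logged podStartSLOduration. The same arithmetic in Go:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets and E2E duration taken from the tracker entry.
        const (
            firstStartedPulling = 30.186713171 // m=+30.186713171
            lastFinishedPulling = 58.957374596 // m=+58.957374596
            e2e                 = 35.988922128 // podStartE2EDuration
        )
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, e2e-pull) // slo=7.218260703s
    }
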
May 16 00:15:10.054736 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:15:10.055604 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. May 16 00:15:10.056789 systemd-logind[1490]: Removed session 15. May 16 00:15:10.646653 systemd-networkd[1420]: vxlan.calico: Gained IPv6LL May 16 00:15:15.096058 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:46134.service - OpenSSH per-connection server daemon (10.0.0.1:46134). May 16 00:15:15.152456 containerd[1508]: time="2025-05-16T00:15:15.151316214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-qdbqj,Uid:0d77b5c7-302f-48ea-88fb-1eaa21477648,Namespace:calico-apiserver,Attempt:0,}" May 16 00:15:15.220770 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 46134 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:15.221609 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:15.238234 systemd-logind[1490]: New session 16 of user core. May 16 00:15:15.263713 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 00:15:15.611679 sshd[4521]: Connection closed by 10.0.0.1 port 46134 May 16 00:15:15.613051 sshd-session[4507]: pam_unix(sshd:session): session closed for user core May 16 00:15:15.620648 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:46134.service: Deactivated successfully. May 16 00:15:15.624237 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:15:15.627906 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. May 16 00:15:15.631611 systemd-logind[1490]: Removed session 16. May 16 00:15:16.329474 systemd-networkd[1420]: cali3f06669d655: Link UP May 16 00:15:16.331465 systemd-networkd[1420]: cali3f06669d655: Gained carrier May 16 00:15:16.636269 containerd[1508]: 2025-05-16 00:15:15.385 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0 calico-apiserver-67fc6f48b5- calico-apiserver 0d77b5c7-302f-48ea-88fb-1eaa21477648 868 0 2025-05-16 00:14:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67fc6f48b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67fc6f48b5-qdbqj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f06669d655 [] [] }} ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-qdbqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-" May 16 00:15:16.636269 containerd[1508]: 2025-05-16 00:15:15.386 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-qdbqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" May 16 00:15:16.636269 containerd[1508]: 2025-05-16 00:15:15.598 [INFO][4531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" HandleID="k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" 
Workload="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.601 [INFO][4531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" HandleID="k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Workload="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ead0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67fc6f48b5-qdbqj", "timestamp":"2025-05-16 00:15:15.598073263 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.601 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.602 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.602 [INFO][4531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.620 [INFO][4531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" host="localhost" May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.642 [INFO][4531] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.655 [INFO][4531] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.660 [INFO][4531] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.666 [INFO][4531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:15:16.636830 containerd[1508]: 2025-05-16 00:15:15.666 [INFO][4531] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" host="localhost" May 16 00:15:16.637087 containerd[1508]: 2025-05-16 00:15:15.672 [INFO][4531] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb May 16 00:15:16.637087 containerd[1508]: 2025-05-16 00:15:15.775 [INFO][4531] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" host="localhost" May 16 00:15:16.637087 containerd[1508]: 2025-05-16 00:15:16.057 [INFO][4531] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" host="localhost" May 16 00:15:16.637087 containerd[1508]: 2025-05-16 00:15:16.057 [INFO][4531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" host="localhost" May 16 00:15:16.637087 containerd[1508]: 2025-05-16 00:15:16.057 
[INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:15:16.637087 containerd[1508]: 2025-05-16 00:15:16.057 [INFO][4531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" HandleID="k8s-pod-network.4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Workload="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" May 16 00:15:16.637262 containerd[1508]: 2025-05-16 00:15:16.061 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-qdbqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0", GenerateName:"calico-apiserver-67fc6f48b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d77b5c7-302f-48ea-88fb-1eaa21477648", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc6f48b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67fc6f48b5-qdbqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f06669d655", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:16.637326 containerd[1508]: 2025-05-16 00:15:16.061 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-qdbqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" May 16 00:15:16.637326 containerd[1508]: 2025-05-16 00:15:16.061 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f06669d655 ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-qdbqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" May 16 00:15:16.637326 containerd[1508]: 2025-05-16 00:15:16.331 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-qdbqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" May 16 00:15:16.638474 containerd[1508]: 2025-05-16 00:15:16.333 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-qdbqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0", GenerateName:"calico-apiserver-67fc6f48b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d77b5c7-302f-48ea-88fb-1eaa21477648", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc6f48b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb", Pod:"calico-apiserver-67fc6f48b5-qdbqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f06669d655", MAC:"9a:70:77:97:65:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:16.638539 containerd[1508]: 2025-05-16 00:15:16.631 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-qdbqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--qdbqj-eth0" May 16 00:15:17.149338 kubelet[2691]: E0516 00:15:17.149212 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:17.149338 kubelet[2691]: E0516 00:15:17.149235 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:17.149878 containerd[1508]: time="2025-05-16T00:15:17.149780886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zxch,Uid:5e130f73-1c5a-4996-b6b9-e20047051b8e,Namespace:kube-system,Attempt:0,}" May 16 00:15:17.150420 containerd[1508]: time="2025-05-16T00:15:17.150334177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pm6bh,Uid:6715f7be-e888-4ac0-8c1b-fde56f478f63,Namespace:kube-system,Attempt:0,}" May 16 00:15:18.070568 systemd-networkd[1420]: cali3f06669d655: Gained IPv6LL May 16 00:15:18.150008 containerd[1508]: time="2025-05-16T00:15:18.149855204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hz4bz,Uid:cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff,Namespace:calico-system,Attempt:0,}" May 16 00:15:18.150008 containerd[1508]: time="2025-05-16T00:15:18.149948502Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f5796b-ssmkv,Uid:2cb50f28-04bb-4f6a-ac09-21e960ec8f86,Namespace:calico-system,Attempt:0,}" May 16 00:15:18.150618 containerd[1508]: time="2025-05-16T00:15:18.150118405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-d79sc,Uid:b11c09e5-3cbb-468c-abbe-dc156fab2b5c,Namespace:calico-system,Attempt:0,}" May 16 00:15:18.251824 systemd-networkd[1420]: cali9bcc65bc5b3: Link UP May 16 00:15:18.252312 systemd-networkd[1420]: cali9bcc65bc5b3: Gained carrier May 16 00:15:18.441989 containerd[1508]: 2025-05-16 00:15:17.772 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7zxch-eth0 coredns-668d6bf9bc- kube-system 5e130f73-1c5a-4996-b6b9-e20047051b8e 854 0 2025-05-16 00:14:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7zxch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9bcc65bc5b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zxch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zxch-" May 16 00:15:18.441989 containerd[1508]: 2025-05-16 00:15:17.772 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zxch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" May 16 00:15:18.441989 containerd[1508]: 2025-05-16 00:15:17.793 [INFO][4576] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" HandleID="k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Workload="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.793 [INFO][4576] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" HandleID="k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Workload="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001337d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7zxch", "timestamp":"2025-05-16 00:15:17.793440734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.793 [INFO][4576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.793 [INFO][4576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.794 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.800 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" host="localhost" May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.804 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.808 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.809 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.811 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:15:18.442234 containerd[1508]: 2025-05-16 00:15:17.811 [INFO][4576] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" host="localhost" May 16 00:15:18.442767 containerd[1508]: 2025-05-16 00:15:17.813 [INFO][4576] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697 May 16 00:15:18.442767 containerd[1508]: 2025-05-16 00:15:18.144 [INFO][4576] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" host="localhost" May 16 00:15:18.442767 containerd[1508]: 2025-05-16 00:15:18.244 [INFO][4576] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" host="localhost" May 16 00:15:18.442767 containerd[1508]: 2025-05-16 00:15:18.244 [INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" host="localhost" May 16 00:15:18.442767 containerd[1508]: 2025-05-16 00:15:18.244 [INFO][4576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
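The pair of kubelet dns.go errors a few entries above ("Nameserver limits exceeded ... 1.1.1.1 1.0.0.1 8.8.8.8") fires once per coredns sandbox being created: the resolv.conf handed to a pod can carry at most three nameservers (the classic glibc MAXNS limit), so the kubelet drops the extras and logs the line it actually applied. A sketch of that truncation with invented names (this is illustrative, not kubelet's code):

```go
package main

import (
	"fmt"
	"strings"
)

// maxResolvConfNameservers mirrors the classic glibc MAXNS limit of 3.
const maxResolvConfNameservers = 3

// applyNameserverLimit keeps the first three servers and reports
// whether any had to be dropped, analogous to kubelet's dns.go warning.
func applyNameserverLimit(servers []string) (applied []string, truncated bool) {
	if len(servers) <= maxResolvConfNameservers {
		return servers, false
	}
	return servers[:maxResolvConfNameservers], true
}

func main() {
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, truncated := applyNameserverLimit(servers)
	if truncated {
		fmt.Printf("Nameserver limits exceeded, applied: %s\n",
			strings.Join(applied, " "))
	}
}
```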
May 16 00:15:18.442767 containerd[1508]: 2025-05-16 00:15:18.244 [INFO][4576] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" HandleID="k8s-pod-network.0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Workload="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" May 16 00:15:18.442933 containerd[1508]: 2025-05-16 00:15:18.248 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zxch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7zxch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e130f73-1c5a-4996-b6b9-e20047051b8e", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7zxch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bcc65bc5b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:18.443023 containerd[1508]: 2025-05-16 00:15:18.248 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zxch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" May 16 00:15:18.443023 containerd[1508]: 2025-05-16 00:15:18.248 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bcc65bc5b3 ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zxch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" May 16 00:15:18.443023 containerd[1508]: 2025-05-16 00:15:18.250 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zxch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" May 16 00:15:18.443118 
containerd[1508]: 2025-05-16 00:15:18.251 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zxch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7zxch-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e130f73-1c5a-4996-b6b9-e20047051b8e", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697", Pod:"coredns-668d6bf9bc-7zxch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bcc65bc5b3", MAC:"96:09:48:36:2c:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:18.443118 containerd[1508]: 2025-05-16 00:15:18.439 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" Namespace="kube-system" Pod="coredns-668d6bf9bc-7zxch" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7zxch-eth0" May 16 00:15:18.658202 systemd-networkd[1420]: cali85111c5c03d: Link UP May 16 00:15:18.659181 systemd-networkd[1420]: cali85111c5c03d: Gained carrier May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.147 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0 coredns-668d6bf9bc- kube-system 6715f7be-e888-4ac0-8c1b-fde56f478f63 850 0 2025-05-16 00:14:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-pm6bh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali85111c5c03d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pm6bh" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pm6bh-" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.147 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pm6bh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.394 [INFO][4602] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" HandleID="k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Workload="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.394 [INFO][4602] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" HandleID="k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Workload="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135f70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-pm6bh", "timestamp":"2025-05-16 00:15:18.394221402 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.394 [INFO][4602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.394 [INFO][4602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.394 [INFO][4602] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.422 [INFO][4602] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.426 [INFO][4602] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.431 [INFO][4602] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.432 [INFO][4602] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.435 [INFO][4602] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.435 [INFO][4602] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.436 [INFO][4602] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.464 [INFO][4602] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.652 [INFO][4602] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.652 [INFO][4602] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" host="localhost" May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.652 [INFO][4602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
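The ipam.go sequence has now run identically for 192.168.88.129, .130 and .131: acquire the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, claim the next free address by writing the block back, then release the lock. A compressed sketch of that flow; every name here is hypothetical, and the real Calico IPAM persists blocks in the datastore rather than in memory:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator is a hypothetical stand-in for Calico's per-host IPAM:
// one affine /26 block per host, addresses handed out under a host-wide lock.
type blockAllocator struct {
	mu    sync.Mutex            // the "host-wide IPAM lock" from the log
	block netip.Prefix          // e.g. 192.168.88.128/26
	used  map[netip.Addr]string // addr -> handle (k8s-pod-network.<containerID>)
}

func newBlockAllocator(cidr string) *blockAllocator {
	return &blockAllocator{
		block: netip.MustParsePrefix(cidr),
		used:  map[netip.Addr]string{},
	}
}

// autoAssign claims the next free address in the affine block, mirroring
// "Attempting to assign 1 addresses from block" in the entries above.
func (a *blockAllocator) autoAssign(handle string) (netip.Addr, error) {
	a.mu.Lock()         // About to acquire host-wide IPAM lock.
	defer a.mu.Unlock() // Released host-wide IPAM lock.

	// Skip the network address itself, then walk the block sequentially.
	for addr := a.block.Addr().Next(); a.block.Contains(addr); addr = addr.Next() {
		if _, taken := a.used[addr]; !taken {
			a.used[addr] = handle // "Writing block in order to claim IPs"
			return addr, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", a.block)
}

func main() {
	alloc := newBlockAllocator("192.168.88.128/26")
	for _, h := range []string{"apiserver-qdbqj", "coredns-7zxch", "coredns-pm6bh"} {
		ip, _ := alloc.autoAssign(h)
		fmt.Println(h, "->", ip) // .129, .130, .131, matching the log
	}
}
```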
May 16 00:15:19.133415 containerd[1508]: 2025-05-16 00:15:18.652 [INFO][4602] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" HandleID="k8s-pod-network.684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Workload="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" May 16 00:15:19.134038 containerd[1508]: 2025-05-16 00:15:18.655 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pm6bh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6715f7be-e888-4ac0-8c1b-fde56f478f63", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-pm6bh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85111c5c03d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:19.134038 containerd[1508]: 2025-05-16 00:15:18.655 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pm6bh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" May 16 00:15:19.134038 containerd[1508]: 2025-05-16 00:15:18.655 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85111c5c03d ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pm6bh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" May 16 00:15:19.134038 containerd[1508]: 2025-05-16 00:15:18.658 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pm6bh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" May 16 00:15:19.134038 
containerd[1508]: 2025-05-16 00:15:18.658 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pm6bh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6715f7be-e888-4ac0-8c1b-fde56f478f63", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef", Pod:"coredns-668d6bf9bc-pm6bh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali85111c5c03d", MAC:"be:b6:38:fa:c8:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:19.134038 containerd[1508]: 2025-05-16 00:15:19.129 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-pm6bh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--pm6bh-eth0" May 16 00:15:19.275041 containerd[1508]: time="2025-05-16T00:15:19.274982801Z" level=info msg="connecting to shim 4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb" address="unix:///run/containerd/s/1898c6725149d0e97c122ce7b09ad9015a14438f662a39a027be002aebfee9e5" namespace=k8s.io protocol=ttrpc version=3 May 16 00:15:19.354545 systemd[1]: Started cri-containerd-4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb.scope - libcontainer container 4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb. 
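The "connecting to shim" entry just above shows how containerd reaches the freshly started sandbox: each shim exposes a per-sandbox unix socket under /run/containerd/s/, spoken over ttrpc (containerd's lightweight gRPC variant, here protocol version 3). A minimal dial in Go using github.com/containerd/ttrpc; the socket path is copied from the log and exists only on that host:

```go
package main

import (
	"fmt"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Per-shim socket path as logged by "connecting to shim"; host-specific.
	const addr = "/run/containerd/s/1898c6725149d0e97c122ce7b09ad9015a14438f662a39a027be002aebfee9e5"

	conn, err := net.Dial("unix", addr)
	if err != nil {
		fmt.Println("dial shim:", err) // fails anywhere but the logged host
		return
	}
	defer conn.Close()

	// ttrpc multiplexes request/response frames over the raw connection;
	// task-service traffic such as the TaskExit events above rides on it.
	client := ttrpc.NewClient(conn)
	defer client.Close()
	fmt.Println("connected to shim over ttrpc")
}
```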
May 16 00:15:19.366264 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:19.734502 systemd-networkd[1420]: cali9bcc65bc5b3: Gained IPv6LL May 16 00:15:19.926595 systemd-networkd[1420]: cali85111c5c03d: Gained IPv6LL May 16 00:15:20.095225 systemd-networkd[1420]: cali2b25870918e: Link UP May 16 00:15:20.095478 systemd-networkd[1420]: cali2b25870918e: Gained carrier May 16 00:15:20.151085 containerd[1508]: time="2025-05-16T00:15:20.151015974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-rr84h,Uid:25e056e5-f7d2-462c-bd3a-fd78b9601b08,Namespace:calico-apiserver,Attempt:0,}" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.651 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0 calico-kube-controllers-847f5796b- calico-system 2cb50f28-04bb-4f6a-ac09-21e960ec8f86 860 0 2025-05-16 00:14:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:847f5796b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-847f5796b-ssmkv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2b25870918e [] [] }} ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Namespace="calico-system" Pod="calico-kube-controllers-847f5796b-ssmkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.651 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Namespace="calico-system" Pod="calico-kube-controllers-847f5796b-ssmkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.675 [INFO][4707] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" HandleID="k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Workload="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.675 [INFO][4707] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" HandleID="k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Workload="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-847f5796b-ssmkv", "timestamp":"2025-05-16 00:15:19.674989129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.675 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.675 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.675 [INFO][4707] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.681 [INFO][4707] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.685 [INFO][4707] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.691 [INFO][4707] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.693 [INFO][4707] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.695 [INFO][4707] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.695 [INFO][4707] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.696 [INFO][4707] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:19.795 [INFO][4707] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:20.088 [INFO][4707] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:20.088 [INFO][4707] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" host="localhost" May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:20.088 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
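Each "Gained IPv6LL" entry records systemd-networkd noticing a link-local address on a new cali*/vxlan interface. The textbook derivation is modified EUI-64 (RFC 4291): flip the universal/local bit of the MAC and splice ff:fe into the middle. Networkd can also be configured for stable-privacy link-local addresses, so treat this as the classic rule rather than a claim about this host; the MAC is borrowed from the cali9bcc65bc5b3 endpoint above purely as sample input:

```go
package main

import (
	"fmt"
	"net"
)

// eui64LinkLocal derives fe80::/64 plus modified EUI-64 from a 48-bit MAC
// (RFC 4291 appendix A): flip bit 1 of the first octet, insert ff:fe.
func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0] = 0xfe
	ip[1] = 0x80
	ip[8] = mac[0] ^ 0x02 // toggle the universal/local bit
	ip[9] = mac[1]
	ip[10] = mac[2]
	ip[11] = 0xff
	ip[12] = 0xfe
	ip[13] = mac[3]
	ip[14] = mac[4]
	ip[15] = mac[5]
	return ip
}

func main() {
	// Endpoint MAC recorded for cali9bcc65bc5b3 in the log above.
	mac, _ := net.ParseMAC("96:09:48:36:2c:a7")
	fmt.Println(eui64LinkLocal(mac)) // fe80::9409:48ff:fe36:2ca7
}
```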
May 16 00:15:20.392748 containerd[1508]: 2025-05-16 00:15:20.088 [INFO][4707] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" HandleID="k8s-pod-network.2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Workload="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" May 16 00:15:20.394868 containerd[1508]: 2025-05-16 00:15:20.091 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Namespace="calico-system" Pod="calico-kube-controllers-847f5796b-ssmkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0", GenerateName:"calico-kube-controllers-847f5796b-", Namespace:"calico-system", SelfLink:"", UID:"2cb50f28-04bb-4f6a-ac09-21e960ec8f86", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847f5796b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-847f5796b-ssmkv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b25870918e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:20.394868 containerd[1508]: 2025-05-16 00:15:20.091 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Namespace="calico-system" Pod="calico-kube-controllers-847f5796b-ssmkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" May 16 00:15:20.394868 containerd[1508]: 2025-05-16 00:15:20.091 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b25870918e ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Namespace="calico-system" Pod="calico-kube-controllers-847f5796b-ssmkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" May 16 00:15:20.394868 containerd[1508]: 2025-05-16 00:15:20.095 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Namespace="calico-system" Pod="calico-kube-controllers-847f5796b-ssmkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" May 16 00:15:20.394868 containerd[1508]: 2025-05-16 00:15:20.096 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Namespace="calico-system" Pod="calico-kube-controllers-847f5796b-ssmkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0", GenerateName:"calico-kube-controllers-847f5796b-", Namespace:"calico-system", SelfLink:"", UID:"2cb50f28-04bb-4f6a-ac09-21e960ec8f86", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847f5796b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a", Pod:"calico-kube-controllers-847f5796b-ssmkv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b25870918e", MAC:"42:3d:69:cb:45:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:20.394868 containerd[1508]: 2025-05-16 00:15:20.387 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" Namespace="calico-system" Pod="calico-kube-controllers-847f5796b-ssmkv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--847f5796b--ssmkv-eth0" May 16 00:15:20.472306 systemd-networkd[1420]: cali6655bec8d45: Link UP May 16 00:15:20.472729 systemd-networkd[1420]: cali6655bec8d45: Gained carrier May 16 00:15:20.624085 containerd[1508]: time="2025-05-16T00:15:20.624034527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-qdbqj,Uid:0d77b5c7-302f-48ea-88fb-1eaa21477648,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb\"" May 16 00:15:20.625560 containerd[1508]: time="2025-05-16T00:15:20.625518285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:15:20.631116 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:58618.service - OpenSSH per-connection server daemon (10.0.0.1:58618). May 16 00:15:20.696954 sshd[4763]: Accepted publickey for core from 10.0.0.1 port 58618 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:20.755408 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:20.761846 systemd-logind[1490]: New session 17 of user core. May 16 00:15:20.766672 systemd[1]: Started session-17.scope - Session 17 of User core. 
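The MACs attached in the "Added Mac" steps (9a:70:77:97:65:3a, 96:09:48:36:2c:a7, 42:3d:69:cb:45:ca, and so on) all have the locally-administered bit set and the multicast bit clear in their first octet, the standard shape for software-generated interface addresses. A hedged sketch of generating one; random generation is a common approach, though the log does not show how Calico actually picks them:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a random unicast, locally-administered MAC:
// clear bit 0 (multicast) and set bit 1 (locally administered).
func randomLocalMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] &^ 0x01) | 0x02
	return mac, nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	// Every MAC in the endpoint dumps above has these same two bits set,
	// e.g. 0x9a and 0x42 are both unicast and locally administered.
	fmt.Println(mac)
}
```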
May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:19.651 [INFO][4652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hz4bz-eth0 csi-node-driver- calico-system cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff 713 0 2025-05-16 00:14:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hz4bz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6655bec8d45 [] [] }} ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Namespace="calico-system" Pod="csi-node-driver-hz4bz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hz4bz-" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:19.651 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Namespace="calico-system" Pod="csi-node-driver-hz4bz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hz4bz-eth0" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:19.680 [INFO][4706] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" HandleID="k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Workload="localhost-k8s-csi--node--driver--hz4bz-eth0" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:19.680 [INFO][4706] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" HandleID="k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Workload="localhost-k8s-csi--node--driver--hz4bz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hz4bz", "timestamp":"2025-05-16 00:15:19.680226051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:19.680 [INFO][4706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.088 [INFO][4706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.088 [INFO][4706] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.376 [INFO][4706] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.403 [INFO][4706] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.409 [INFO][4706] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.410 [INFO][4706] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.413 [INFO][4706] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.413 [INFO][4706] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.415 [INFO][4706] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760 May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.446 [INFO][4706] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.466 [INFO][4706] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.466 [INFO][4706] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" host="localhost" May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.466 [INFO][4706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 00:15:20.767623 containerd[1508]: 2025-05-16 00:15:20.466 [INFO][4706] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" HandleID="k8s-pod-network.34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Workload="localhost-k8s-csi--node--driver--hz4bz-eth0" May 16 00:15:20.768376 containerd[1508]: 2025-05-16 00:15:20.470 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Namespace="calico-system" Pod="csi-node-driver-hz4bz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hz4bz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hz4bz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hz4bz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6655bec8d45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:20.768376 containerd[1508]: 2025-05-16 00:15:20.470 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Namespace="calico-system" Pod="csi-node-driver-hz4bz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hz4bz-eth0" May 16 00:15:20.768376 containerd[1508]: 2025-05-16 00:15:20.470 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6655bec8d45 ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Namespace="calico-system" Pod="csi-node-driver-hz4bz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hz4bz-eth0" May 16 00:15:20.768376 containerd[1508]: 2025-05-16 00:15:20.472 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Namespace="calico-system" Pod="csi-node-driver-hz4bz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hz4bz-eth0" May 16 00:15:20.768376 containerd[1508]: 2025-05-16 00:15:20.473 [INFO][4652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Namespace="calico-system" Pod="csi-node-driver-hz4bz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hz4bz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hz4bz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760", Pod:"csi-node-driver-hz4bz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6655bec8d45", MAC:"56:cc:ea:f2:cd:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:20.768376 containerd[1508]: 2025-05-16 00:15:20.763 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" Namespace="calico-system" Pod="csi-node-driver-hz4bz" WorkloadEndpoint="localhost-k8s-csi--node--driver--hz4bz-eth0" May 16 00:15:20.932667 systemd-networkd[1420]: cali1bbbfb35ba6: Link UP May 16 00:15:20.933972 systemd-networkd[1420]: cali1bbbfb35ba6: Gained carrier May 16 00:15:20.943421 sshd[4772]: Connection closed by 10.0.0.1 port 58618 May 16 00:15:20.945426 sshd-session[4763]: pam_unix(sshd:session): session closed for user core May 16 00:15:20.950120 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:58618.service: Deactivated successfully. May 16 00:15:20.952509 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:15:20.953225 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit. May 16 00:15:20.954082 systemd-logind[1490]: Removed session 17. 
May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.398 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0 goldmane-78d55f7ddc- calico-system b11c09e5-3cbb-468c-abbe-dc156fab2b5c 865 0 2025-05-16 00:14:31 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-d79sc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1bbbfb35ba6 [] [] <nil>}} ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-d79sc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--d79sc-" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.399 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-d79sc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.431 [INFO][4750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" HandleID="k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Workload="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.431 [INFO][4750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" HandleID="k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Workload="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000358c30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-d79sc", "timestamp":"2025-05-16 00:15:20.431791108 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.432 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.466 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.466 [INFO][4750] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.755 [INFO][4750] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.802 [INFO][4750] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.807 [INFO][4750] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.808 [INFO][4750] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.811 [INFO][4750] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.811 [INFO][4750] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.814 [INFO][4750] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.836 [INFO][4750] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.926 [INFO][4750] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.926 [INFO][4750] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" host="localhost" May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.926 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
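The ipam.go sequence above — acquire the host-wide lock, look up the host's block affinities, confirm affinity for 192.168.88.128/26, claim an address, write the block back, release the lock — is Calico's block-affinity IPAM at work. A minimal sketch of that flow follows; the types and names are illustrative, not Calico's actual implementation, and reserved-address handling plus fallback to non-affine blocks are omitted.

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // Block is one /26 IPAM block; Allocated maps IP -> allocation handle.
    type Block struct {
        CIDR      *net.IPNet
        Allocated map[string]string
    }

    // IPAM serializes assignments with a host-wide lock, as the
    // "Acquired/Released host-wide IPAM lock" lines above describe.
    type IPAM struct {
        mu         sync.Mutex
        affinities map[string]*Block // host -> block it has affinity for
    }

    // AutoAssign mirrors the logged flow: lock, try the host's affine
    // block, claim the next free address, record the handle, unlock.
    func (m *IPAM) AutoAssign(host, handle string) (net.IP, error) {
        m.mu.Lock()
        defer m.mu.Unlock()
        blk, ok := m.affinities[host]
        if !ok {
            return nil, fmt.Errorf("no affine block for host %q", host)
        }
        for ip := blk.CIDR.IP.Mask(blk.CIDR.Mask); blk.CIDR.Contains(ip); ip = next(ip) {
            if _, used := blk.Allocated[ip.String()]; !used {
                blk.Allocated[ip.String()] = handle // "Writing block in order to claim IPs"
                return ip, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", blk.CIDR)
    }

    // next returns the numerically following IP address.
    func next(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
        m := &IPAM{affinities: map[string]*Block{
            "localhost": {CIDR: cidr, Allocated: map[string]string{}},
        }}
        ip, err := m.AutoAssign("localhost", "k8s-pod-network.example")
        fmt.Println(ip, err) // first free address in the affine block
    }

The handle strings in the log ("k8s-pod-network.<containerID>") are what ties each claimed address back to its pod, which is why the same handle appears in every ipam.go line of a given assignment.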
May 16 00:15:20.977788 containerd[1508]: 2025-05-16 00:15:20.926 [INFO][4750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" HandleID="k8s-pod-network.111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Workload="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" May 16 00:15:20.978704 containerd[1508]: 2025-05-16 00:15:20.929 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-d79sc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"b11c09e5-3cbb-468c-abbe-dc156fab2b5c", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-d79sc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1bbbfb35ba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:20.978704 containerd[1508]: 2025-05-16 00:15:20.929 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-d79sc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" May 16 00:15:20.978704 containerd[1508]: 2025-05-16 00:15:20.929 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bbbfb35ba6 ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-d79sc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" May 16 00:15:20.978704 containerd[1508]: 2025-05-16 00:15:20.933 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-d79sc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" May 16 00:15:20.978704 containerd[1508]: 2025-05-16 00:15:20.938 [INFO][4729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-d79sc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"b11c09e5-3cbb-468c-abbe-dc156fab2b5c", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e", Pod:"goldmane-78d55f7ddc-d79sc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1bbbfb35ba6", MAC:"f2:92:da:5d:2e:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:20.978704 containerd[1508]: 2025-05-16 00:15:20.974 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" Namespace="calico-system" Pod="goldmane-78d55f7ddc-d79sc" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--d79sc-eth0" May 16 00:15:21.310402 systemd-networkd[1420]: cali57e62a5a41f: Link UP May 16 00:15:21.311079 systemd-networkd[1420]: cali57e62a5a41f: Gained carrier May 16 00:15:21.342268 containerd[1508]: time="2025-05-16T00:15:21.342209402Z" level=info msg="connecting to shim 0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697" address="unix:///run/containerd/s/99efd46a709c903d443ccb4f82bc8685d19e1166da5515ff0ba8ba07bcd193e0" namespace=k8s.io protocol=ttrpc version=3 May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:20.934 [INFO][4783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0 calico-apiserver-67fc6f48b5- calico-apiserver 25e056e5-f7d2-462c-bd3a-fd78b9601b08 867 0 2025-05-16 00:14:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67fc6f48b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67fc6f48b5-rr84h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57e62a5a41f [] [] }} ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-rr84h" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:20.934 [INFO][4783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Namespace="calico-apiserver" 
Pod="calico-apiserver-67fc6f48b5-rr84h" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.013 [INFO][4807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" HandleID="k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Workload="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.013 [INFO][4807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" HandleID="k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Workload="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000235620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67fc6f48b5-rr84h", "timestamp":"2025-05-16 00:15:21.013332989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.013 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.013 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.013 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.136 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.155 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.161 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.163 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.166 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.166 [INFO][4807] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.168 [INFO][4807] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.203 [INFO][4807] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.303 [INFO][4807] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.303 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" host="localhost" May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.303 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 00:15:21.371987 containerd[1508]: 2025-05-16 00:15:21.303 [INFO][4807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" HandleID="k8s-pod-network.b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Workload="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" May 16 00:15:21.372596 containerd[1508]: 2025-05-16 00:15:21.307 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-rr84h" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0", GenerateName:"calico-apiserver-67fc6f48b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"25e056e5-f7d2-462c-bd3a-fd78b9601b08", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc6f48b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67fc6f48b5-rr84h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57e62a5a41f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:21.372596 containerd[1508]: 2025-05-16 00:15:21.307 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-rr84h" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" May 16 00:15:21.372596 containerd[1508]: 2025-05-16 00:15:21.307 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57e62a5a41f ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-rr84h" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" May 16 00:15:21.372596 containerd[1508]: 2025-05-16 00:15:21.312 
[INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-rr84h" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" May 16 00:15:21.372596 containerd[1508]: 2025-05-16 00:15:21.313 [INFO][4783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-rr84h" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0", GenerateName:"calico-apiserver-67fc6f48b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"25e056e5-f7d2-462c-bd3a-fd78b9601b08", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 0, 14, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fc6f48b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b", Pod:"calico-apiserver-67fc6f48b5-rr84h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57e62a5a41f", MAC:"d2:f0:76:0f:2a:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 00:15:21.372596 containerd[1508]: 2025-05-16 00:15:21.368 [INFO][4783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" Namespace="calico-apiserver" Pod="calico-apiserver-67fc6f48b5-rr84h" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fc6f48b5--rr84h-eth0" May 16 00:15:21.386736 systemd[1]: Started cri-containerd-0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697.scope - libcontainer container 0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697. 
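All three host-side interfaces chosen by dataplane_linux.go above (cali6655bec8d45, cali1bbbfb35ba6, cali57e62a5a41f) are exactly 15 characters, the Linux interface-name limit (IFNAMSIZ minus the terminator). Calico derives such names deterministically from the workload endpoint identity; the sketch below assumes a SHA-1 of namespace.pod truncated to 11 hex characters, which matches the shape of the logged names but may not be Calico's exact hash input.

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName builds a "cali" + 11-hex-char name, 15 chars total, so it
    // fits IFNAMSIZ. The hash input is an assumption for illustration.
    func vethName(namespace, pod string) string {
        sum := sha1.Sum([]byte(namespace + "." + pod))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("calico-system", "csi-node-driver-hz4bz")) // always 15 chars
    }

Deterministic naming means a replayed CNI ADD for the same pod reuses the same host interface instead of leaking a new one.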
May 16 00:15:21.398682 systemd-networkd[1420]: cali2b25870918e: Gained IPv6LL May 16 00:15:21.405824 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:21.727123 containerd[1508]: time="2025-05-16T00:15:21.727052295Z" level=info msg="connecting to shim 684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef" address="unix:///run/containerd/s/123f50eea4e01d1dd4b41bccf31e1f258caf31e67e7216b716041d7ed7d8ba5f" namespace=k8s.io protocol=ttrpc version=3 May 16 00:15:21.767688 systemd[1]: Started cri-containerd-684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef.scope - libcontainer container 684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef. May 16 00:15:21.782745 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:21.806774 containerd[1508]: time="2025-05-16T00:15:21.806727995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7zxch,Uid:5e130f73-1c5a-4996-b6b9-e20047051b8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697\"" May 16 00:15:21.808059 kubelet[2691]: E0516 00:15:21.807584 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:21.810214 containerd[1508]: time="2025-05-16T00:15:21.810149834Z" level=info msg="CreateContainer within sandbox \"0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:15:22.102680 systemd-networkd[1420]: cali6655bec8d45: Gained IPv6LL May 16 00:15:22.230335 containerd[1508]: time="2025-05-16T00:15:22.230281613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pm6bh,Uid:6715f7be-e888-4ac0-8c1b-fde56f478f63,Namespace:kube-system,Attempt:0,} returns sandbox id \"684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef\"" May 16 00:15:22.231152 kubelet[2691]: E0516 00:15:22.231119 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:22.232715 containerd[1508]: time="2025-05-16T00:15:22.232677290Z" level=info msg="CreateContainer within sandbox \"684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:15:22.334627 containerd[1508]: time="2025-05-16T00:15:22.334557389Z" level=info msg="connecting to shim 2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a" address="unix:///run/containerd/s/0f497164120df3ad64bbf6d9af0a6333e356cda3e26553a7bba30ab856155568" namespace=k8s.io protocol=ttrpc version=3 May 16 00:15:22.360622 systemd[1]: Started cri-containerd-2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a.scope - libcontainer container 2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a. 
May 16 00:15:22.375158 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:22.423518 systemd-networkd[1420]: cali1bbbfb35ba6: Gained IPv6LL May 16 00:15:22.435222 containerd[1508]: time="2025-05-16T00:15:22.435158216Z" level=info msg="connecting to shim 34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760" address="unix:///run/containerd/s/0e28885da7537c98de420dba532d7f60aff044f742c2a80117b47e17c8dfdaa9" namespace=k8s.io protocol=ttrpc version=3 May 16 00:15:22.465558 systemd[1]: Started cri-containerd-34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760.scope - libcontainer container 34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760. May 16 00:15:22.480262 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:22.522338 containerd[1508]: time="2025-05-16T00:15:22.522280238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847f5796b-ssmkv,Uid:2cb50f28-04bb-4f6a-ac09-21e960ec8f86,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a\"" May 16 00:15:22.667257 containerd[1508]: time="2025-05-16T00:15:22.667067391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hz4bz,Uid:cda0ce1d-952c-4f48-bbf6-c0ac3e9334ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760\"" May 16 00:15:22.699658 containerd[1508]: time="2025-05-16T00:15:22.698992969Z" level=info msg="connecting to shim 111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e" address="unix:///run/containerd/s/8457ea2bfee2c251c4a33007320ae7d242f90a111f17e175e3a31ce6ad6aaf8e" namespace=k8s.io protocol=ttrpc version=3 May 16 00:15:22.729625 systemd[1]: Started cri-containerd-111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e.scope - libcontainer container 111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e. 
May 16 00:15:22.730640 containerd[1508]: time="2025-05-16T00:15:22.729908147Z" level=info msg="Container 2bf4acb98f50142218777ba5738b63f2a99de5540fd67ba41f1d0e79d0deebaa: CDI devices from CRI Config.CDIDevices: []" May 16 00:15:22.766527 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:22.998556 systemd-networkd[1420]: cali57e62a5a41f: Gained IPv6LL May 16 00:15:23.287656 containerd[1508]: time="2025-05-16T00:15:23.287392009Z" level=info msg="Container ab70b3189d13b7465c9c808385d2dcae784fccc08e555b83964e5e1ba2ce1f2d: CDI devices from CRI Config.CDIDevices: []" May 16 00:15:23.427245 containerd[1508]: time="2025-05-16T00:15:23.427151814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-d79sc,Uid:b11c09e5-3cbb-468c-abbe-dc156fab2b5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"111f9a3f0f01718ebcfd036de03154454b4b3e3c2317ed01e47f758f6929311e\"" May 16 00:15:23.482268 containerd[1508]: time="2025-05-16T00:15:23.482201779Z" level=info msg="connecting to shim b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b" address="unix:///run/containerd/s/092f6fc59feee0dcc30e0258d2249f4bd6c4b91fb361c01dded764dbed55b46f" namespace=k8s.io protocol=ttrpc version=3 May 16 00:15:23.512584 systemd[1]: Started cri-containerd-b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b.scope - libcontainer container b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b. May 16 00:15:23.527188 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:15:23.858164 containerd[1508]: time="2025-05-16T00:15:23.858100006Z" level=info msg="CreateContainer within sandbox \"0ed54bf2263aa8026f40d8b1d1fcd461e229557d2d95ecad9dcea49ccbb54697\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bf4acb98f50142218777ba5738b63f2a99de5540fd67ba41f1d0e79d0deebaa\"" May 16 00:15:23.858852 containerd[1508]: time="2025-05-16T00:15:23.858809092Z" level=info msg="StartContainer for \"2bf4acb98f50142218777ba5738b63f2a99de5540fd67ba41f1d0e79d0deebaa\"" May 16 00:15:23.859959 containerd[1508]: time="2025-05-16T00:15:23.859912352Z" level=info msg="connecting to shim 2bf4acb98f50142218777ba5738b63f2a99de5540fd67ba41f1d0e79d0deebaa" address="unix:///run/containerd/s/99efd46a709c903d443ccb4f82bc8685d19e1166da5515ff0ba8ba07bcd193e0" protocol=ttrpc version=3 May 16 00:15:23.868454 containerd[1508]: time="2025-05-16T00:15:23.866777256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fc6f48b5-rr84h,Uid:25e056e5-f7d2-462c-bd3a-fd78b9601b08,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b\"" May 16 00:15:23.889635 systemd[1]: Started cri-containerd-2bf4acb98f50142218777ba5738b63f2a99de5540fd67ba41f1d0e79d0deebaa.scope - libcontainer container 2bf4acb98f50142218777ba5738b63f2a99de5540fd67ba41f1d0e79d0deebaa. 
May 16 00:15:23.976254 containerd[1508]: time="2025-05-16T00:15:23.976153964Z" level=info msg="CreateContainer within sandbox \"684f8c64c8a24c52e0af5bf32578c24ac80c01a39fa072f1c9a28b54fd7b75ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab70b3189d13b7465c9c808385d2dcae784fccc08e555b83964e5e1ba2ce1f2d\"" May 16 00:15:23.977227 containerd[1508]: time="2025-05-16T00:15:23.977073472Z" level=info msg="StartContainer for \"ab70b3189d13b7465c9c808385d2dcae784fccc08e555b83964e5e1ba2ce1f2d\"" May 16 00:15:23.978796 containerd[1508]: time="2025-05-16T00:15:23.978623214Z" level=info msg="connecting to shim ab70b3189d13b7465c9c808385d2dcae784fccc08e555b83964e5e1ba2ce1f2d" address="unix:///run/containerd/s/123f50eea4e01d1dd4b41bccf31e1f258caf31e67e7216b716041d7ed7d8ba5f" protocol=ttrpc version=3 May 16 00:15:24.012612 systemd[1]: Started cri-containerd-ab70b3189d13b7465c9c808385d2dcae784fccc08e555b83964e5e1ba2ce1f2d.scope - libcontainer container ab70b3189d13b7465c9c808385d2dcae784fccc08e555b83964e5e1ba2ce1f2d. May 16 00:15:24.148883 containerd[1508]: time="2025-05-16T00:15:24.148730811Z" level=info msg="StartContainer for \"2bf4acb98f50142218777ba5738b63f2a99de5540fd67ba41f1d0e79d0deebaa\" returns successfully" May 16 00:15:24.148883 containerd[1508]: time="2025-05-16T00:15:24.148866761Z" level=info msg="StartContainer for \"ab70b3189d13b7465c9c808385d2dcae784fccc08e555b83964e5e1ba2ce1f2d\" returns successfully" May 16 00:15:24.978292 kubelet[2691]: E0516 00:15:24.978220 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:24.980057 kubelet[2691]: E0516 00:15:24.980039 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:25.251097 kubelet[2691]: I0516 00:15:25.249882 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pm6bh" podStartSLOduration=73.249863823 podStartE2EDuration="1m13.249863823s" podCreationTimestamp="2025-05-16 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:15:25.249732601 +0000 UTC m=+79.190186343" watchObservedRunningTime="2025-05-16 00:15:25.249863823 +0000 UTC m=+79.190317565" May 16 00:15:25.457690 kubelet[2691]: I0516 00:15:25.457637 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7zxch" podStartSLOduration=73.457618283 podStartE2EDuration="1m13.457618283s" podCreationTimestamp="2025-05-16 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:15:25.457473406 +0000 UTC m=+79.397927148" watchObservedRunningTime="2025-05-16 00:15:25.457618283 +0000 UTC m=+79.398072025" May 16 00:15:25.958294 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:58634.service - OpenSSH per-connection server daemon (10.0.0.1:58634). 
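The coredns lines above walk the standard CRI lifecycle: RunPodSandbox returns a sandbox id, CreateContainer inside that sandbox returns a container id, and StartContainer launches it. A pared-down client-side sketch of those same three calls against containerd's CRI socket follows; the requests are reduced to the fields needed to show the flow (real kubelet requests carry far more), the image reference is illustrative, and the pod name/UID are taken from the log.

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtime.PodSandboxConfig{
            Metadata: &runtime.PodSandboxMetadata{
                Name:      "coredns-668d6bf9bc-7zxch",
                Namespace: "kube-system",
                Uid:       "5e130f73-1c5a-4996-b6b9-e20047051b8e",
            },
        }
        // "RunPodSandbox ... returns sandbox id"
        sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }
        // "CreateContainer within sandbox ... returns container id"
        ctr, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtime.ContainerConfig{
                Metadata: &runtime.ContainerMetadata{Name: "coredns"},
                Image:    &runtime.ImageSpec{Image: "example.invalid/coredns:tag"}, // illustrative
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            panic(err)
        }
        // "StartContainer ... returns successfully"
        if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{
            ContainerId: ctr.ContainerId,
        }); err != nil {
            panic(err)
        }
    }

The "connecting to shim" lines in between are containerd's side of the same sequence: each sandbox and container gets a ttrpc shim socket under /run/containerd/s/.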
May 16 00:15:25.982471 kubelet[2691]: E0516 00:15:25.982441 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:25.983563 kubelet[2691]: E0516 00:15:25.983489 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:26.016250 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 58634 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:26.018256 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:26.022695 systemd-logind[1490]: New session 18 of user core. May 16 00:15:26.032497 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 00:15:26.327456 sshd[5185]: Connection closed by 10.0.0.1 port 58634 May 16 00:15:26.327736 sshd-session[5183]: pam_unix(sshd:session): session closed for user core May 16 00:15:26.332787 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:58634.service: Deactivated successfully. May 16 00:15:26.335894 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:15:26.336856 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit. May 16 00:15:26.338007 systemd-logind[1490]: Removed session 18. May 16 00:15:26.985304 kubelet[2691]: E0516 00:15:26.985269 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:26.986007 kubelet[2691]: E0516 00:15:26.985356 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:27.149686 kubelet[2691]: E0516 00:15:27.149635 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:28.415716 containerd[1508]: time="2025-05-16T00:15:28.415644208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:28.420157 containerd[1508]: time="2025-05-16T00:15:28.420118529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 16 00:15:28.429871 containerd[1508]: time="2025-05-16T00:15:28.429845316Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:28.439446 containerd[1508]: time="2025-05-16T00:15:28.439413157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:28.440034 containerd[1508]: time="2025-05-16T00:15:28.440007659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 7.814444948s" 
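The recurring dns.go:153 events are kubelet warning that the node's resolv.conf lists more nameservers than it will propagate to pods: kubelet keeps at most three (the classic glibc resolver limit) and drops the rest, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A simplified sketch of that truncation, assuming straightforward resolv.conf parsing:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // kubelet propagates at most three nameservers (glibc's historical
    // limit) and warns about the remainder; parsing here is simplified.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded, applying: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }

The warning repeats on every pod sync because the condition is re-evaluated each time pod DNS config is built, not because anything is getting worse.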
May 16 00:15:28.440090 containerd[1508]: time="2025-05-16T00:15:28.440040381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 16 00:15:28.440833 containerd[1508]: time="2025-05-16T00:15:28.440811321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 16 00:15:28.442735 containerd[1508]: time="2025-05-16T00:15:28.442710639Z" level=info msg="CreateContainer within sandbox \"4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:15:28.495792 containerd[1508]: time="2025-05-16T00:15:28.495732095Z" level=info msg="Container e1a61f0f4d098ddd23a3f521f88004e93d680e5bff9d3a204f45d57f6763487f: CDI devices from CRI Config.CDIDevices: []" May 16 00:15:28.560508 containerd[1508]: time="2025-05-16T00:15:28.560469748Z" level=info msg="CreateContainer within sandbox \"4691a9733e96b9970f1991d24e1759ccbbdcff5fa5c7917b79987ed953f7e5eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e1a61f0f4d098ddd23a3f521f88004e93d680e5bff9d3a204f45d57f6763487f\"" May 16 00:15:28.561002 containerd[1508]: time="2025-05-16T00:15:28.560974077Z" level=info msg="StartContainer for \"e1a61f0f4d098ddd23a3f521f88004e93d680e5bff9d3a204f45d57f6763487f\"" May 16 00:15:28.562271 containerd[1508]: time="2025-05-16T00:15:28.562221822Z" level=info msg="connecting to shim e1a61f0f4d098ddd23a3f521f88004e93d680e5bff9d3a204f45d57f6763487f" address="unix:///run/containerd/s/1898c6725149d0e97c122ce7b09ad9015a14438f662a39a027be002aebfee9e5" protocol=ttrpc version=3 May 16 00:15:28.584500 systemd[1]: Started cri-containerd-e1a61f0f4d098ddd23a3f521f88004e93d680e5bff9d3a204f45d57f6763487f.scope - libcontainer container e1a61f0f4d098ddd23a3f521f88004e93d680e5bff9d3a204f45d57f6763487f. May 16 00:15:28.688021 containerd[1508]: time="2025-05-16T00:15:28.687920928Z" level=info msg="StartContainer for \"e1a61f0f4d098ddd23a3f521f88004e93d680e5bff9d3a204f45d57f6763487f\" returns successfully" May 16 00:15:29.016068 kubelet[2691]: I0516 00:15:29.015988 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67fc6f48b5-qdbqj" podStartSLOduration=52.200540566 podStartE2EDuration="1m0.015966013s" podCreationTimestamp="2025-05-16 00:14:29 +0000 UTC" firstStartedPulling="2025-05-16 00:15:20.62527965 +0000 UTC m=+74.565733393" lastFinishedPulling="2025-05-16 00:15:28.440705098 +0000 UTC m=+82.381158840" observedRunningTime="2025-05-16 00:15:29.01580812 +0000 UTC m=+82.956261862" watchObservedRunningTime="2025-05-16 00:15:29.015966013 +0000 UTC m=+82.956419755" May 16 00:15:29.149875 kubelet[2691]: E0516 00:15:29.149822 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:29.993607 kubelet[2691]: I0516 00:15:29.993573 2691 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:15:31.348531 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:52190.service - OpenSSH per-connection server daemon (10.0.0.1:52190). 
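The pod_startup_latency_tracker numbers are self-consistent and worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). For the coredns pods earlier the pull timestamps are the zero time ("0001-01-01"), so SLO and E2E durations coincide at 73s. Checking the arithmetic for the calico-apiserver-67fc6f48b5-qdbqj line above, with the timestamps copied verbatim from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the tracker arithmetic: SLO duration = E2E duration
    // minus the image-pull window. Values are from the 00:15:29 line.
    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-05-16 00:14:29 +0000 UTC")
        observedRunning := parse("2025-05-16 00:15:29.015966013 +0000 UTC")
        firstPull := parse("2025-05-16 00:15:20.62527965 +0000 UTC")
        lastPull := parse("2025-05-16 00:15:28.440705098 +0000 UTC")

        e2e := observedRunning.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(e2e) // 1m0.015966013s, matching podStartE2EDuration
        fmt.Println(slo) // 52.200540565s, matching the logged 52.200540566 to within a nanosecond of rounding
    }

Excluding pull time is deliberate: the SLO metric is meant to measure how fast the node can start a pod once it has (or is fetching) the image, independent of registry speed.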
May 16 00:15:31.426331 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 52190 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:31.428768 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:31.436354 systemd-logind[1490]: New session 19 of user core. May 16 00:15:31.444566 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 00:15:32.707810 kubelet[2691]: I0516 00:15:32.707776 2691 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:15:32.809139 sshd[5263]: Connection closed by 10.0.0.1 port 52190 May 16 00:15:32.810774 sshd-session[5259]: pam_unix(sshd:session): session closed for user core May 16 00:15:32.815785 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:52190.service: Deactivated successfully. May 16 00:15:32.817904 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:15:32.823505 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit. May 16 00:15:32.825062 systemd-logind[1490]: Removed session 19. May 16 00:15:34.149722 kubelet[2691]: E0516 00:15:34.149674 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:34.638419 containerd[1508]: time="2025-05-16T00:15:34.638223158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:34.689593 containerd[1508]: time="2025-05-16T00:15:34.689447823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 16 00:15:34.792546 containerd[1508]: time="2025-05-16T00:15:34.792485728Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:35.031047 containerd[1508]: time="2025-05-16T00:15:35.030989341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:35.031914 containerd[1508]: time="2025-05-16T00:15:35.031870310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 6.591028901s" May 16 00:15:35.031996 containerd[1508]: time="2025-05-16T00:15:35.031923142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 16 00:15:35.034544 containerd[1508]: time="2025-05-16T00:15:35.034405943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 16 00:15:35.045202 containerd[1508]: time="2025-05-16T00:15:35.045153543Z" level=info msg="CreateContainer within sandbox \"2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 00:15:35.692877 containerd[1508]: time="2025-05-16T00:15:35.692831294Z" level=info 
msg="Container 22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd: CDI devices from CRI Config.CDIDevices: []" May 16 00:15:36.052074 containerd[1508]: time="2025-05-16T00:15:36.052022319Z" level=info msg="CreateContainer within sandbox \"2ea26774259486b1e7201a46b129099deb55c69e3a5e7de21b970ad40916916a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd\"" May 16 00:15:36.052880 containerd[1508]: time="2025-05-16T00:15:36.052837002Z" level=info msg="StartContainer for \"22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd\"" May 16 00:15:36.054284 containerd[1508]: time="2025-05-16T00:15:36.054256944Z" level=info msg="connecting to shim 22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd" address="unix:///run/containerd/s/0f497164120df3ad64bbf6d9af0a6333e356cda3e26553a7bba30ab856155568" protocol=ttrpc version=3 May 16 00:15:36.077503 systemd[1]: Started cri-containerd-22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd.scope - libcontainer container 22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd. May 16 00:15:36.347859 containerd[1508]: time="2025-05-16T00:15:36.347730617Z" level=info msg="StartContainer for \"22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd\" returns successfully" May 16 00:15:37.052580 containerd[1508]: time="2025-05-16T00:15:37.052521428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd\" id:\"30de7e4878839871489aa982bb05907f0f1ca82b3946640d24b0458254fc4094\" pid:5340 exited_at:{seconds:1747354537 nanos:52241388}" May 16 00:15:37.369497 kubelet[2691]: I0516 00:15:37.368220 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-847f5796b-ssmkv" podStartSLOduration=52.857781858 podStartE2EDuration="1m5.368200236s" podCreationTimestamp="2025-05-16 00:14:32 +0000 UTC" firstStartedPulling="2025-05-16 00:15:22.52371036 +0000 UTC m=+76.464164112" lastFinishedPulling="2025-05-16 00:15:35.034128748 +0000 UTC m=+88.974582490" observedRunningTime="2025-05-16 00:15:37.368195096 +0000 UTC m=+91.308648839" watchObservedRunningTime="2025-05-16 00:15:37.368200236 +0000 UTC m=+91.308653978" May 16 00:15:37.822445 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:52200.service - OpenSSH per-connection server daemon (10.0.0.1:52200). May 16 00:15:38.040274 sshd[5351]: Accepted publickey for core from 10.0.0.1 port 52200 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:38.044722 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:38.051621 systemd-logind[1490]: New session 20 of user core. May 16 00:15:38.064603 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 16 00:15:38.104796 containerd[1508]: time="2025-05-16T00:15:38.104662901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5\" id:\"458767027cbec6f85b29a2e178f60c70e2a7685a4a4594e5ce233a5db3d2679d\" pid:5363 exited_at:{seconds:1747354538 nanos:104174818}" May 16 00:15:38.208933 sshd[5376]: Connection closed by 10.0.0.1 port 52200 May 16 00:15:38.208859 sshd-session[5351]: pam_unix(sshd:session): session closed for user core May 16 00:15:38.213942 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:52200.service: Deactivated successfully. May 16 00:15:38.216948 systemd[1]: session-20.scope: Deactivated successfully. May 16 00:15:38.218276 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit. May 16 00:15:38.219308 systemd-logind[1490]: Removed session 20. May 16 00:15:40.785095 containerd[1508]: time="2025-05-16T00:15:40.785018556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:40.830640 containerd[1508]: time="2025-05-16T00:15:40.830512894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 16 00:15:40.845284 containerd[1508]: time="2025-05-16T00:15:40.845217483Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:40.865902 containerd[1508]: time="2025-05-16T00:15:40.865815417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:40.866491 containerd[1508]: time="2025-05-16T00:15:40.866450897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 5.830558857s" May 16 00:15:40.866491 containerd[1508]: time="2025-05-16T00:15:40.866484763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 16 00:15:40.867790 containerd[1508]: time="2025-05-16T00:15:40.867676069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:15:40.868603 containerd[1508]: time="2025-05-16T00:15:40.868572073Z" level=info msg="CreateContainer within sandbox \"34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 00:15:41.079987 containerd[1508]: time="2025-05-16T00:15:41.079780882Z" level=info msg="Container 27111103daa85761d9c0c167ee7bb751c969111c43888ccf44337a6de8b4749a: CDI devices from CRI Config.CDIDevices: []" May 16 00:15:41.130677 containerd[1508]: time="2025-05-16T00:15:41.130601081Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:15:41.236059 containerd[1508]: time="2025-05-16T00:15:41.235961491Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:15:41.242424 containerd[1508]: time="2025-05-16T00:15:41.242352076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:15:41.249568 kubelet[2691]: E0516 00:15:41.249519 2691 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:15:41.250027 kubelet[2691]: E0516 00:15:41.249582 2691 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:15:41.250500 containerd[1508]: time="2025-05-16T00:15:41.250454635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 00:15:41.250880 kubelet[2691]: E0516 00:15:41.250810 2691 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rhrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-d79sc_calico-system(b11c09e5-3cbb-468c-abbe-dc156fab2b5c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:15:41.252026 kubelet[2691]: E0516 00:15:41.251990 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-d79sc" podUID="b11c09e5-3cbb-468c-abbe-dc156fab2b5c" May 16 00:15:41.483241 containerd[1508]: time="2025-05-16T00:15:41.483197720Z" level=info msg="CreateContainer within sandbox \"34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"27111103daa85761d9c0c167ee7bb751c969111c43888ccf44337a6de8b4749a\"" May 16 00:15:41.483716 containerd[1508]: time="2025-05-16T00:15:41.483677789Z" level=info msg="StartContainer for \"27111103daa85761d9c0c167ee7bb751c969111c43888ccf44337a6de8b4749a\"" May 16 00:15:41.485558 containerd[1508]: time="2025-05-16T00:15:41.485519536Z" level=info msg="connecting to shim 27111103daa85761d9c0c167ee7bb751c969111c43888ccf44337a6de8b4749a" address="unix:///run/containerd/s/0e28885da7537c98de420dba532d7f60aff044f742c2a80117b47e17c8dfdaa9" protocol=ttrpc version=3 May 16 00:15:41.508544 systemd[1]: Started cri-containerd-27111103daa85761d9c0c167ee7bb751c969111c43888ccf44337a6de8b4749a.scope - libcontainer container 27111103daa85761d9c0c167ee7bb751c969111c43888ccf44337a6de8b4749a. 
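The goldmane pull dies at the first step of registry auth: containerd asks ghcr.io's token endpoint for an anonymous pull token and gets 403 Forbidden, so the image reference can never be resolved, and kubelet surfaces it as ErrImagePull and then ImagePullBackOff below. The failing request can be reproduced directly against the URL from the log:

    package main

    import (
        "fmt"
        "net/http"
    )

    // Probes the same token endpoint the failed pull hit. A 403 here
    // means anonymous pulls of that repository are refused, which is
    // exactly the ErrImagePull/ImagePullBackOff chain kubelet reports.
    func main() {
        url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // was "403 Forbidden" at the time of these logs
    }

Note that the other Calico images (apiserver, kube-controllers, csi) pull fine from the same registry, so the 403 is specific to the goldmane repository's anonymous-access policy, not a node-side networking problem.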
May 16 00:15:41.627720 containerd[1508]: time="2025-05-16T00:15:41.627668804Z" level=info msg="StartContainer for \"27111103daa85761d9c0c167ee7bb751c969111c43888ccf44337a6de8b4749a\" returns successfully" May 16 00:15:42.020084 kubelet[2691]: E0516 00:15:42.020033 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-d79sc" podUID="b11c09e5-3cbb-468c-abbe-dc156fab2b5c" May 16 00:15:42.634117 containerd[1508]: time="2025-05-16T00:15:42.634060625Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:42.691038 containerd[1508]: time="2025-05-16T00:15:42.690947859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 16 00:15:42.692973 containerd[1508]: time="2025-05-16T00:15:42.692931842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 1.442448621s" May 16 00:15:42.693072 containerd[1508]: time="2025-05-16T00:15:42.692977611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 16 00:15:42.694268 containerd[1508]: time="2025-05-16T00:15:42.694231099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 16 00:15:42.695279 containerd[1508]: time="2025-05-16T00:15:42.695252138Z" level=info msg="CreateContainer within sandbox \"b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 00:15:43.041596 containerd[1508]: time="2025-05-16T00:15:43.041538914Z" level=info msg="Container 1668ab043c3d1e8ba710d30a365f3cc5453ddb25324245ea06c70e96743361a9: CDI devices from CRI Config.CDIDevices: []" May 16 00:15:43.223048 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:53968.service - OpenSSH per-connection server daemon (10.0.0.1:53968). May 16 00:15:43.777701 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 53968 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:43.779473 sshd-session[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:43.783938 systemd-logind[1490]: New session 21 of user core. May 16 00:15:43.790487 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 00:15:43.970918 sshd[5430]: Connection closed by 10.0.0.1 port 53968 May 16 00:15:43.971247 sshd-session[5428]: pam_unix(sshd:session): session closed for user core May 16 00:15:43.975804 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:53968.service: Deactivated successfully. 
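A successful pull, by contrast, records three names for one image, as in the apiserver entries above: the mutable repo tag, the local image id, and the content-addressed repo digest; the elapsed time is logged as a Go duration string. A small illustration (values copied from the log; the pinning step is illustrative, not containerd's code):

package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	// Names copied verbatim from the "Pulled image" entry above.
	repoTag := "ghcr.io/flatcar/calico/apiserver:v3.30.0"
	repoDigest := "ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4"

	// The tag can be repointed at any time; the digest cannot, so a
	// name@digest reference is what you would pin in a manifest.
	name := strings.SplitN(repoTag, ":", 2)[0]
	digest := strings.SplitN(repoDigest, "@", 2)[1]
	fmt.Printf("pinned reference: %s@%s\n", name, digest)

	// "in 1.442448621s" parses directly as a Go duration.
	d, err := time.ParseDuration("1.442448621s")
	if err != nil {
		panic(err)
	}
	fmt.Println("pull took:", d.Round(time.Millisecond))
}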
May 16 00:15:43.977894 systemd[1]: session-21.scope: Deactivated successfully. May 16 00:15:43.978781 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit. May 16 00:15:43.979681 systemd-logind[1490]: Removed session 21. May 16 00:15:44.701602 containerd[1508]: time="2025-05-16T00:15:44.701555758Z" level=info msg="CreateContainer within sandbox \"b584a8b85c8065800172775cfd3e51810267556e61ef4b8548488d5c03aba69b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1668ab043c3d1e8ba710d30a365f3cc5453ddb25324245ea06c70e96743361a9\"" May 16 00:15:44.702416 containerd[1508]: time="2025-05-16T00:15:44.702179598Z" level=info msg="StartContainer for \"1668ab043c3d1e8ba710d30a365f3cc5453ddb25324245ea06c70e96743361a9\"" May 16 00:15:44.703633 containerd[1508]: time="2025-05-16T00:15:44.703606946Z" level=info msg="connecting to shim 1668ab043c3d1e8ba710d30a365f3cc5453ddb25324245ea06c70e96743361a9" address="unix:///run/containerd/s/092f6fc59feee0dcc30e0258d2249f4bd6c4b91fb361c01dded764dbed55b46f" protocol=ttrpc version=3 May 16 00:15:44.728550 systemd[1]: Started cri-containerd-1668ab043c3d1e8ba710d30a365f3cc5453ddb25324245ea06c70e96743361a9.scope - libcontainer container 1668ab043c3d1e8ba710d30a365f3cc5453ddb25324245ea06c70e96743361a9. May 16 00:15:44.877634 containerd[1508]: time="2025-05-16T00:15:44.877580874Z" level=info msg="StartContainer for \"1668ab043c3d1e8ba710d30a365f3cc5453ddb25324245ea06c70e96743361a9\" returns successfully" May 16 00:15:45.073877 kubelet[2691]: I0516 00:15:45.073796 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67fc6f48b5-rr84h" podStartSLOduration=57.248680363 podStartE2EDuration="1m16.073776432s" podCreationTimestamp="2025-05-16 00:14:29 +0000 UTC" firstStartedPulling="2025-05-16 00:15:23.868894284 +0000 UTC m=+77.809348026" lastFinishedPulling="2025-05-16 00:15:42.693990353 +0000 UTC m=+96.634444095" observedRunningTime="2025-05-16 00:15:45.073073459 +0000 UTC m=+99.013527221" watchObservedRunningTime="2025-05-16 00:15:45.073776432 +0000 UTC m=+99.014230174" May 16 00:15:45.149878 kubelet[2691]: E0516 00:15:45.149840 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:45.150103 kubelet[2691]: E0516 00:15:45.150025 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:15:48.986052 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:44006.service - OpenSSH per-connection server daemon (10.0.0.1:44006). May 16 00:15:48.989254 containerd[1508]: time="2025-05-16T00:15:48.989206209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:49.059508 containerd[1508]: time="2025-05-16T00:15:49.059233111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 16 00:15:49.059897 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 44006 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:49.078355 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:49.091491 systemd-logind[1490]: New session 22 of user core. 
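The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration appears to be podStartE2EDuration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). Recomputing from the logged values, under that assumption:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Pull window and E2E duration copied from the tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2025-05-16 00:15:23.868894284 +0000 UTC")
	last, _ := time.Parse(layout, "2025-05-16 00:15:42.693990353 +0000 UTC")
	pulling := last.Sub(first) // 18.825096069s spent pulling the image

	e2e, _ := time.ParseDuration("1m16.073776432s") // podStartE2EDuration

	// 76.073776432s - 18.825096069s = 57.248680363s,
	// exactly the logged podStartSLOduration.
	fmt.Println(e2e - pulling)
}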
May 16 00:15:49.100501 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 00:15:49.778576 containerd[1508]: time="2025-05-16T00:15:49.778527472Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:49.850454 containerd[1508]: time="2025-05-16T00:15:49.849704819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:15:49.850454 containerd[1508]: time="2025-05-16T00:15:49.850338911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 7.156073344s" May 16 00:15:49.850454 containerd[1508]: time="2025-05-16T00:15:49.850395521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 16 00:15:49.857019 containerd[1508]: time="2025-05-16T00:15:49.856978165Z" level=info msg="CreateContainer within sandbox \"34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 00:15:49.950202 sshd[5489]: Connection closed by 10.0.0.1 port 44006 May 16 00:15:49.950559 sshd-session[5487]: pam_unix(sshd:session): session closed for user core May 16 00:15:49.955468 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:44006.service: Deactivated successfully. May 16 00:15:49.957749 systemd[1]: session-22.scope: Deactivated successfully. May 16 00:15:49.958513 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit. May 16 00:15:49.959497 systemd-logind[1490]: Removed session 22. May 16 00:15:50.060779 containerd[1508]: time="2025-05-16T00:15:50.060663435Z" level=info msg="Container 811a4a8fbce2f271f54413604583a0ebc53d20ec5f30dd9759b416050bdc36f8: CDI devices from CRI Config.CDIDevices: []" May 16 00:15:50.171832 containerd[1508]: time="2025-05-16T00:15:50.171779200Z" level=info msg="CreateContainer within sandbox \"34f44247bde1b323feb09ac84f67e192d6401b85a4a8971628c8b444c527a760\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"811a4a8fbce2f271f54413604583a0ebc53d20ec5f30dd9759b416050bdc36f8\"" May 16 00:15:50.172328 containerd[1508]: time="2025-05-16T00:15:50.172278801Z" level=info msg="StartContainer for \"811a4a8fbce2f271f54413604583a0ebc53d20ec5f30dd9759b416050bdc36f8\"" May 16 00:15:50.173838 containerd[1508]: time="2025-05-16T00:15:50.173811642Z" level=info msg="connecting to shim 811a4a8fbce2f271f54413604583a0ebc53d20ec5f30dd9759b416050bdc36f8" address="unix:///run/containerd/s/0e28885da7537c98de420dba532d7f60aff044f742c2a80117b47e17c8dfdaa9" protocol=ttrpc version=3 May 16 00:15:50.195559 systemd[1]: Started cri-containerd-811a4a8fbce2f271f54413604583a0ebc53d20ec5f30dd9759b416050bdc36f8.scope - libcontainer container 811a4a8fbce2f271f54413604583a0ebc53d20ec5f30dd9759b416050bdc36f8. 
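Both csi containers above, "calico-csi" (started at 00:15:41) and "csi-node-driver-registrar" (created at 00:15:50), are created within the same sandbox 34f44247... and connect to the same shim socket: containerd runs one shim per pod sandbox, so sibling containers of a pod share it. Grouping the two "connecting to shim" entries (lightly trimmed from the log) makes that visible:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Trimmed excerpts of the two "connecting to shim" entries above.
	entries := []string{
		`connecting to shim 27111103daa85761d9c0c167ee7bb751c969111c43888ccf44337a6de8b4749a address="unix:///run/containerd/s/0e28885da7537c98de420dba532d7f60aff044f742c2a80117b47e17c8dfdaa9"`,
		`connecting to shim 811a4a8fbce2f271f54413604583a0ebc53d20ec5f30dd9759b416050bdc36f8 address="unix:///run/containerd/s/0e28885da7537c98de420dba532d7f60aff044f742c2a80117b47e17c8dfdaa9"`,
	}
	re := regexp.MustCompile(`connecting to shim (\S+) address="([^"]+)"`)
	byShim := map[string][]string{}
	for _, e := range entries {
		m := re.FindStringSubmatch(e)
		byShim[m[2]] = append(byShim[m[2]], m[1][:12])
	}
	for addr, ids := range byShim {
		fmt.Println(addr, "->", ids) // one socket, two container ids
	}
}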
May 16 00:15:50.300742 containerd[1508]: time="2025-05-16T00:15:50.300694043Z" level=info msg="StartContainer for \"811a4a8fbce2f271f54413604583a0ebc53d20ec5f30dd9759b416050bdc36f8\" returns successfully" May 16 00:15:50.828669 kubelet[2691]: I0516 00:15:50.828633 2691 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 00:15:50.828669 kubelet[2691]: I0516 00:15:50.828664 2691 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 00:15:51.489963 kubelet[2691]: I0516 00:15:51.489856 2691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hz4bz" podStartSLOduration=52.307282328 podStartE2EDuration="1m19.489841616s" podCreationTimestamp="2025-05-16 00:14:32 +0000 UTC" firstStartedPulling="2025-05-16 00:15:22.668598405 +0000 UTC m=+76.609052147" lastFinishedPulling="2025-05-16 00:15:49.851157693 +0000 UTC m=+103.791611435" observedRunningTime="2025-05-16 00:15:51.489346622 +0000 UTC m=+105.429800364" watchObservedRunningTime="2025-05-16 00:15:51.489841616 +0000 UTC m=+105.430295358" May 16 00:15:54.963680 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:44020.service - OpenSSH per-connection server daemon (10.0.0.1:44020). May 16 00:15:55.017597 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 44020 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:55.019320 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:55.023785 systemd-logind[1490]: New session 23 of user core. May 16 00:15:55.030492 systemd[1]: Started session-23.scope - Session 23 of User core. May 16 00:15:55.156570 sshd[5549]: Connection closed by 10.0.0.1 port 44020 May 16 00:15:55.156929 sshd-session[5547]: pam_unix(sshd:session): session closed for user core May 16 00:15:55.165186 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:44020.service: Deactivated successfully. May 16 00:15:55.167233 systemd[1]: session-23.scope: Deactivated successfully. May 16 00:15:55.168802 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit. May 16 00:15:55.170799 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:44030.service - OpenSSH per-connection server daemon (10.0.0.1:44030). May 16 00:15:55.171762 systemd-logind[1490]: Removed session 23. May 16 00:15:55.225834 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 44030 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:55.227242 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:55.231669 systemd-logind[1490]: New session 24 of user core. May 16 00:15:55.239505 systemd[1]: Started session-24.scope - Session 24 of User core. May 16 00:15:55.620678 sshd[5565]: Connection closed by 10.0.0.1 port 44030 May 16 00:15:55.622903 sshd-session[5562]: pam_unix(sshd:session): session closed for user core May 16 00:15:55.632454 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:44030.service: Deactivated successfully. May 16 00:15:55.634968 systemd[1]: session-24.scope: Deactivated successfully. May 16 00:15:55.636827 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit. May 16 00:15:55.638228 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:44036.service - OpenSSH per-connection server daemon (10.0.0.1:44036). 
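The csi_plugin.go lines above show kubelet's plugin watcher validating and then registering the csi.tigera.io driver through a Unix socket it discovered under /var/lib/kubelet/plugins. A hedged sketch that merely probes whether that socket (path copied from the log) accepts connections; kubelet itself goes further and speaks the CSI Identity gRPC service over it to learn the driver name and the advertised version 1.0.0:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the csi_plugin.go entries above.
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("CSI socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("CSI driver is listening on", sock)
}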
May 16 00:15:55.640778 systemd-logind[1490]: Removed session 24. May 16 00:15:55.710837 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 44036 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:55.712586 sshd-session[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:55.717753 systemd-logind[1490]: New session 25 of user core. May 16 00:15:55.721546 systemd[1]: Started session-25.scope - Session 25 of User core. May 16 00:15:56.150531 containerd[1508]: time="2025-05-16T00:15:56.150418741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:15:56.408893 containerd[1508]: time="2025-05-16T00:15:56.408086794Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:15:56.518949 containerd[1508]: time="2025-05-16T00:15:56.518854153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:15:56.518949 containerd[1508]: time="2025-05-16T00:15:56.518913177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:15:56.519283 kubelet[2691]: E0516 00:15:56.519216 2691 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:15:56.519806 kubelet[2691]: E0516 00:15:56.519283 2691 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:15:56.519806 kubelet[2691]: E0516 00:15:56.519456 2691 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rhrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-d79sc_calico-system(b11c09e5-3cbb-468c-abbe-dc156fab2b5c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:15:56.520743 kubelet[2691]: E0516 00:15:56.520661 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-d79sc" podUID="b11c09e5-3cbb-468c-abbe-dc156fab2b5c" May 16 00:15:56.814276 sshd[5580]: Connection closed by 10.0.0.1 port 44036 May 16 00:15:56.814903 sshd-session[5577]: pam_unix(sshd:session): session closed for user core May 16 00:15:56.825573 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:44036.service: Deactivated successfully. May 16 00:15:56.829087 systemd[1]: session-25.scope: Deactivated successfully. May 16 00:15:56.830792 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit. May 16 00:15:56.833982 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:44048.service - OpenSSH per-connection server daemon (10.0.0.1:44048). May 16 00:15:56.836011 systemd-logind[1490]: Removed session 25. May 16 00:15:56.892406 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 44048 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:56.895029 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:56.902653 systemd-logind[1490]: New session 26 of user core. May 16 00:15:56.911573 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 00:15:57.441565 sshd[5601]: Connection closed by 10.0.0.1 port 44048 May 16 00:15:57.445079 sshd-session[5598]: pam_unix(sshd:session): session closed for user core May 16 00:15:57.480061 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:44048.service: Deactivated successfully. May 16 00:15:57.489053 systemd[1]: session-26.scope: Deactivated successfully. May 16 00:15:57.502736 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit. May 16 00:15:57.503699 systemd[1]: Started sshd@26-10.0.0.81:22-10.0.0.1:44052.service - OpenSSH per-connection server daemon (10.0.0.1:44052). May 16 00:15:57.504949 systemd-logind[1490]: Removed session 26. May 16 00:15:57.588544 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 44052 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:15:57.591946 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:15:57.628545 systemd-logind[1490]: New session 27 of user core. May 16 00:15:57.649033 systemd[1]: Started session-27.scope - Session 27 of User core. May 16 00:15:57.993565 sshd[5615]: Connection closed by 10.0.0.1 port 44052 May 16 00:15:57.997702 sshd-session[5612]: pam_unix(sshd:session): session closed for user core May 16 00:15:58.002659 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit. May 16 00:15:58.005885 systemd[1]: sshd@26-10.0.0.81:22-10.0.0.1:44052.service: Deactivated successfully. May 16 00:15:58.011672 systemd[1]: session-27.scope: Deactivated successfully. May 16 00:15:58.014590 systemd-logind[1490]: Removed session 27. May 16 00:16:03.008580 systemd[1]: Started sshd@27-10.0.0.81:22-10.0.0.1:43030.service - OpenSSH per-connection server daemon (10.0.0.1:43030). May 16 00:16:03.059181 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 43030 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:16:03.063453 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:03.072710 systemd-logind[1490]: New session 28 of user core. May 16 00:16:03.079618 systemd[1]: Started session-28.scope - Session 28 of User core. 
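goldmane's pulls fail at 00:15:41 and 00:15:56 above and will fail again at 00:16:23 below, with ImagePullBackOff entries in between: after each ErrImagePull, kubelet waits out a growing delay before the next attempt. A sketch of that doubling-with-cap policy; the 10s base and 5m cap are kubelet's documented defaults, assumed here rather than read from this log:

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the wait after every failed pull, capped at max.
func nextDelay(prev, max time.Duration) time.Duration {
	if prev == 0 {
		return 10 * time.Second // assumed initial backoff
	}
	if d := 2 * prev; d < max {
		return d
	}
	return max
}

func main() {
	var d time.Duration
	for attempt := 1; attempt <= 6; attempt++ {
		d = nextDelay(d, 5*time.Minute)
		fmt.Printf("after failure %d: wait %v before retrying\n", attempt, d)
	}
	// 10s, 20s, 40s, ... is consistent with the widening gaps between
	// the retry attempts recorded in this log.
}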
May 16 00:16:03.285079 sshd[5637]: Connection closed by 10.0.0.1 port 43030 May 16 00:16:03.285371 sshd-session[5634]: pam_unix(sshd:session): session closed for user core May 16 00:16:03.291058 systemd[1]: sshd@27-10.0.0.81:22-10.0.0.1:43030.service: Deactivated successfully. May 16 00:16:03.294288 systemd[1]: session-28.scope: Deactivated successfully. May 16 00:16:03.296911 systemd-logind[1490]: Session 28 logged out. Waiting for processes to exit. May 16 00:16:03.299318 systemd-logind[1490]: Removed session 28. May 16 00:16:06.184997 containerd[1508]: time="2025-05-16T00:16:06.184940939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd\" id:\"823c7b78d3bff2067f190d7aae5aea21ec95c630d2d4308067870577e21eed86\" pid:5662 exited_at:{seconds:1747354566 nanos:184619902}" May 16 00:16:07.057204 containerd[1508]: time="2025-05-16T00:16:07.057138702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22fba508d6ad4810b35884ae00b65525488b70a2470bc86c4e4eacf3416fc0fd\" id:\"638409aeebefd08bb7f8e26fe2b1a3fceeee0a506169eb9883e982113fe6797c\" pid:5685 exited_at:{seconds:1747354567 nanos:56889385}" May 16 00:16:08.013759 containerd[1508]: time="2025-05-16T00:16:08.013710652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4be196b349c32d750708d16ceafa36f7763b6e3cf74594e4a0ff943728ec6ca5\" id:\"4dee0c33771cab347ccc9995cc8e344873fbbe41284b4177116d27300b820fcf\" pid:5705 exited_at:{seconds:1747354568 nanos:13313415}" May 16 00:16:08.298250 systemd[1]: Started sshd@28-10.0.0.81:22-10.0.0.1:54006.service - OpenSSH per-connection server daemon (10.0.0.1:54006). May 16 00:16:08.356962 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 54006 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:16:08.358736 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:08.363259 systemd-logind[1490]: New session 29 of user core. May 16 00:16:08.367534 systemd[1]: Started session-29.scope - Session 29 of User core. May 16 00:16:08.547688 sshd[5720]: Connection closed by 10.0.0.1 port 54006 May 16 00:16:08.548288 sshd-session[5718]: pam_unix(sshd:session): session closed for user core May 16 00:16:08.554407 systemd[1]: sshd@28-10.0.0.81:22-10.0.0.1:54006.service: Deactivated successfully. May 16 00:16:08.557530 systemd[1]: session-29.scope: Deactivated successfully. May 16 00:16:08.558836 systemd-logind[1490]: Session 29 logged out. Waiting for processes to exit. May 16 00:16:08.560132 systemd-logind[1490]: Removed session 29. May 16 00:16:12.152703 kubelet[2691]: E0516 00:16:12.152598 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-d79sc" podUID="b11c09e5-3cbb-468c-abbe-dc156fab2b5c" May 16 00:16:13.571553 systemd[1]: Started sshd@29-10.0.0.81:22-10.0.0.1:54010.service - OpenSSH per-connection server daemon (10.0.0.1:54010). 
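The TaskExit events above carry each container's exit time as a raw protobuf timestamp, seconds plus nanos since the Unix epoch. Converting the first one back to wall-clock time confirms it matches the journal's own 00:16:06 line:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values copied from the first TaskExit event above.
	t := time.Unix(1747354566, 184619902).UTC()
	fmt.Println(t) // 2025-05-16 00:16:06.184619902 +0000 UTC
}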
May 16 00:16:13.618716 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 54010 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:16:13.620387 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:13.625823 systemd-logind[1490]: New session 30 of user core. May 16 00:16:13.638514 systemd[1]: Started session-30.scope - Session 30 of User core. May 16 00:16:13.795179 sshd[5739]: Connection closed by 10.0.0.1 port 54010 May 16 00:16:13.795539 sshd-session[5737]: pam_unix(sshd:session): session closed for user core May 16 00:16:13.800549 systemd[1]: sshd@29-10.0.0.81:22-10.0.0.1:54010.service: Deactivated successfully. May 16 00:16:13.802754 systemd[1]: session-30.scope: Deactivated successfully. May 16 00:16:13.803501 systemd-logind[1490]: Session 30 logged out. Waiting for processes to exit. May 16 00:16:13.804347 systemd-logind[1490]: Removed session 30. May 16 00:16:18.808989 systemd[1]: Started sshd@30-10.0.0.81:22-10.0.0.1:48916.service - OpenSSH per-connection server daemon (10.0.0.1:48916). May 16 00:16:18.874228 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 48916 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:16:18.876407 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:18.881179 systemd-logind[1490]: New session 31 of user core. May 16 00:16:18.888510 systemd[1]: Started session-31.scope - Session 31 of User core. May 16 00:16:19.041485 sshd[5756]: Connection closed by 10.0.0.1 port 48916 May 16 00:16:19.041849 sshd-session[5754]: pam_unix(sshd:session): session closed for user core May 16 00:16:19.048571 systemd[1]: sshd@30-10.0.0.81:22-10.0.0.1:48916.service: Deactivated successfully. May 16 00:16:19.050815 systemd[1]: session-31.scope: Deactivated successfully. May 16 00:16:19.051506 systemd-logind[1490]: Session 31 logged out. Waiting for processes to exit. May 16 00:16:19.052455 systemd-logind[1490]: Removed session 31. 
May 16 00:16:23.151983 containerd[1508]: time="2025-05-16T00:16:23.151519186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 00:16:23.414038 containerd[1508]: time="2025-05-16T00:16:23.413890226Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 00:16:23.431155 containerd[1508]: time="2025-05-16T00:16:23.431092710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 00:16:23.431348 containerd[1508]: time="2025-05-16T00:16:23.431137548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 00:16:23.431459 kubelet[2691]: E0516 00:16:23.431393 2691 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:16:23.431881 kubelet[2691]: E0516 00:16:23.431466 2691 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 00:16:23.431881 kubelet[2691]: E0516 00:16:23.431611 2691 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rhrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-d79sc_calico-system(b11c09e5-3cbb-468c-abbe-dc156fab2b5c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 00:16:23.432859 kubelet[2691]: E0516 00:16:23.432790 2691 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-d79sc" podUID="b11c09e5-3cbb-468c-abbe-dc156fab2b5c" May 16 00:16:24.056384 systemd[1]: Started sshd@31-10.0.0.81:22-10.0.0.1:48926.service - OpenSSH per-connection server daemon (10.0.0.1:48926). May 16 00:16:24.115297 sshd[5770]: Accepted publickey for core from 10.0.0.1 port 48926 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:16:24.117140 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:24.121946 systemd-logind[1490]: New session 32 of user core. May 16 00:16:24.130541 systemd[1]: Started session-32.scope - Session 32 of User core. May 16 00:16:24.327930 sshd[5772]: Connection closed by 10.0.0.1 port 48926 May 16 00:16:24.328222 sshd-session[5770]: pam_unix(sshd:session): session closed for user core May 16 00:16:24.332740 systemd[1]: sshd@31-10.0.0.81:22-10.0.0.1:48926.service: Deactivated successfully. May 16 00:16:24.334955 systemd[1]: session-32.scope: Deactivated successfully. May 16 00:16:24.335938 systemd-logind[1490]: Session 32 logged out. Waiting for processes to exit. May 16 00:16:24.337973 systemd-logind[1490]: Removed session 32. May 16 00:16:29.149795 kubelet[2691]: E0516 00:16:29.149741 2691 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:16:29.343423 systemd[1]: Started sshd@32-10.0.0.81:22-10.0.0.1:38782.service - OpenSSH per-connection server daemon (10.0.0.1:38782). May 16 00:16:29.391459 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 38782 ssh2: RSA SHA256:VClJ7K4P6c9/rg+rJGjy/BA6Vatbs9UvxKJcX8Q7ZgI May 16 00:16:29.393477 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:16:29.400287 systemd-logind[1490]: New session 33 of user core. May 16 00:16:29.408593 systemd[1]: Started session-33.scope - Session 33 of User core. May 16 00:16:29.570631 sshd[5787]: Connection closed by 10.0.0.1 port 38782 May 16 00:16:29.570994 sshd-session[5785]: pam_unix(sshd:session): session closed for user core May 16 00:16:29.577690 systemd[1]: sshd@32-10.0.0.81:22-10.0.0.1:38782.service: Deactivated successfully. May 16 00:16:29.580234 systemd[1]: session-33.scope: Deactivated successfully. May 16 00:16:29.581119 systemd-logind[1490]: Session 33 logged out. Waiting for processes to exit. May 16 00:16:29.582125 systemd-logind[1490]: Removed session 33.
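The recurring "Nameserver limits exceeded" entries, including the two at 00:15:45 and the one at 00:16:29 above, report kubelet truncating a pod's resolv.conf to three nameservers (the classic resolver limit); the applied line keeps the first three it saw. A sketch of that truncation (illustrative parsing, not kubelet's code; the fourth server is hypothetical):

package main

import (
	"fmt"
	"strings"
)

// applyNameserverLimit keeps at most max nameservers, mirroring the
// truncation the "Nameserver limits exceeded" entries describe.
func applyNameserverLimit(resolvConf string, max int) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > max {
		servers = servers[:max]
	}
	return servers
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf, 3)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}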