May 13 12:35:03.806874 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 12:35:03.806894 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 11:28:23 -00 2025
May 13 12:35:03.806932 kernel: KASLR enabled
May 13 12:35:03.806939 kernel: efi: EFI v2.7 by EDK II
May 13 12:35:03.806944 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 13 12:35:03.806950 kernel: random: crng init done
May 13 12:35:03.806956 kernel: secureboot: Secure boot disabled
May 13 12:35:03.806962 kernel: ACPI: Early table checksum verification disabled
May 13 12:35:03.806968 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 13 12:35:03.806975 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 12:35:03.806981 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.806987 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.806993 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.806999 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.807006 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.807013 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.807020 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.807026 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.807032 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:35:03.807038 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 12:35:03.807044 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 13 12:35:03.807050 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:35:03.807056 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 13 12:35:03.807062 kernel: Zone ranges:
May 13 12:35:03.807068 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:35:03.807075 kernel: DMA32 empty
May 13 12:35:03.807081 kernel: Normal empty
May 13 12:35:03.807087 kernel: Device empty
May 13 12:35:03.807093 kernel: Movable zone start for each node
May 13 12:35:03.807098 kernel: Early memory node ranges
May 13 12:35:03.807105 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 13 12:35:03.807111 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 13 12:35:03.807117 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 13 12:35:03.807123 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 13 12:35:03.807129 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 13 12:35:03.807135 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 13 12:35:03.807141 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 13 12:35:03.807148 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 13 12:35:03.807154 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 13 12:35:03.807160 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 12:35:03.807169 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 12:35:03.807176 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 12:35:03.807182 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 12:35:03.807190 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:35:03.807196 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 12:35:03.807210 kernel: psci: probing for conduit method from ACPI.
May 13 12:35:03.807217 kernel: psci: PSCIv1.1 detected in firmware.
May 13 12:35:03.807223 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 12:35:03.807230 kernel: psci: Trusted OS migration not required
May 13 12:35:03.807236 kernel: psci: SMC Calling Convention v1.1
May 13 12:35:03.807242 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 12:35:03.807249 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 13 12:35:03.807255 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 13 12:35:03.807264 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 12:35:03.807270 kernel: Detected PIPT I-cache on CPU0
May 13 12:35:03.807276 kernel: CPU features: detected: GIC system register CPU interface
May 13 12:35:03.807283 kernel: CPU features: detected: Spectre-v4
May 13 12:35:03.807289 kernel: CPU features: detected: Spectre-BHB
May 13 12:35:03.807295 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 12:35:03.807302 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 12:35:03.807308 kernel: CPU features: detected: ARM erratum 1418040
May 13 12:35:03.807314 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 12:35:03.807320 kernel: alternatives: applying boot alternatives
May 13 12:35:03.807328 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b20e935bbd8772a1b0c6883755acb6e2a52b7a903a0b8e12c8ff59ca86b84928
May 13 12:35:03.807336 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 12:35:03.807346 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 12:35:03.807352 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 12:35:03.807359 kernel: Fallback order for Node 0: 0
May 13 12:35:03.807365 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 13 12:35:03.807371 kernel: Policy zone: DMA
May 13 12:35:03.807378 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 12:35:03.807384 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 13 12:35:03.807390 kernel: software IO TLB: area num 4.
May 13 12:35:03.807396 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 13 12:35:03.807403 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 13 12:35:03.807409 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 12:35:03.807418 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 12:35:03.807425 kernel: rcu: RCU event tracing is enabled.
May 13 12:35:03.807432 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 12:35:03.807438 kernel: Trampoline variant of Tasks RCU enabled.
May 13 12:35:03.807445 kernel: Tracing variant of Tasks RCU enabled.
May 13 12:35:03.807451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 12:35:03.807457 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 12:35:03.807464 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:35:03.807470 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:35:03.807477 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 12:35:03.807483 kernel: GICv3: 256 SPIs implemented
May 13 12:35:03.807491 kernel: GICv3: 0 Extended SPIs implemented
May 13 12:35:03.807497 kernel: Root IRQ handler: gic_handle_irq
May 13 12:35:03.807503 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 12:35:03.807510 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 13 12:35:03.807516 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 12:35:03.807523 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 12:35:03.807529 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 13 12:35:03.807536 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 13 12:35:03.807542 kernel: GICv3: using LPI property table @0x0000000040100000
May 13 12:35:03.807549 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 13 12:35:03.807555 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 12:35:03.807562 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:35:03.807569 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 12:35:03.807576 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 12:35:03.807583 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 12:35:03.807589 kernel: arm-pv: using stolen time PV
May 13 12:35:03.807596 kernel: Console: colour dummy device 80x25
May 13 12:35:03.807602 kernel: ACPI: Core revision 20240827
May 13 12:35:03.807609 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 12:35:03.807616 kernel: pid_max: default: 32768 minimum: 301
May 13 12:35:03.807622 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 12:35:03.807630 kernel: landlock: Up and running.
May 13 12:35:03.807636 kernel: SELinux: Initializing.
May 13 12:35:03.807643 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:35:03.807650 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:35:03.807656 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 12:35:03.807664 kernel: rcu: Hierarchical SRCU implementation.
May 13 12:35:03.807671 kernel: rcu: Max phase no-delay instances is 400.
May 13 12:35:03.807677 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 12:35:03.807684 kernel: Remapping and enabling EFI services.
May 13 12:35:03.807692 kernel: smp: Bringing up secondary CPUs ...
May 13 12:35:03.807702 kernel: Detected PIPT I-cache on CPU1
May 13 12:35:03.807709 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 12:35:03.807718 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 13 12:35:03.807724 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:35:03.807731 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 12:35:03.807738 kernel: Detected PIPT I-cache on CPU2
May 13 12:35:03.807745 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 12:35:03.807752 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 13 12:35:03.807760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:35:03.807767 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 12:35:03.807774 kernel: Detected PIPT I-cache on CPU3
May 13 12:35:03.807781 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 12:35:03.807787 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 13 12:35:03.807794 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:35:03.807801 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 12:35:03.807808 kernel: smp: Brought up 1 node, 4 CPUs
May 13 12:35:03.807815 kernel: SMP: Total of 4 processors activated.
May 13 12:35:03.807823 kernel: CPU: All CPU(s) started at EL1
May 13 12:35:03.807830 kernel: CPU features: detected: 32-bit EL0 Support
May 13 12:35:03.807837 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 12:35:03.807844 kernel: CPU features: detected: Common not Private translations
May 13 12:35:03.807850 kernel: CPU features: detected: CRC32 instructions
May 13 12:35:03.807857 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 12:35:03.807864 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 12:35:03.807871 kernel: CPU features: detected: LSE atomic instructions
May 13 12:35:03.807878 kernel: CPU features: detected: Privileged Access Never
May 13 12:35:03.807886 kernel: CPU features: detected: RAS Extension Support
May 13 12:35:03.807892 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 12:35:03.807905 kernel: alternatives: applying system-wide alternatives
May 13 12:35:03.807913 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 13 12:35:03.807920 kernel: Memory: 2440920K/2572288K available (11072K kernel code, 2276K rwdata, 8932K rodata, 39488K init, 1034K bss, 125600K reserved, 0K cma-reserved)
May 13 12:35:03.807927 kernel: devtmpfs: initialized
May 13 12:35:03.807934 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 12:35:03.807941 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 12:35:03.807948 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 12:35:03.807957 kernel: 0 pages in range for non-PLT usage
May 13 12:35:03.807964 kernel: 508528 pages in range for PLT usage
May 13 12:35:03.807971 kernel: pinctrl core: initialized pinctrl subsystem
May 13 12:35:03.807978 kernel: SMBIOS 3.0.0 present.
May 13 12:35:03.807985 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 12:35:03.807991 kernel: DMI: Memory slots populated: 1/1
May 13 12:35:03.807998 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 12:35:03.808005 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 12:35:03.808012 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 12:35:03.808020 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 12:35:03.808027 kernel: audit: initializing netlink subsys (disabled)
May 13 12:35:03.808034 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
May 13 12:35:03.808041 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 12:35:03.808048 kernel: cpuidle: using governor menu
May 13 12:35:03.808055 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 12:35:03.808062 kernel: ASID allocator initialised with 32768 entries
May 13 12:35:03.808069 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 12:35:03.808076 kernel: Serial: AMBA PL011 UART driver
May 13 12:35:03.808084 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 12:35:03.808091 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 12:35:03.808098 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 12:35:03.808105 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 12:35:03.808112 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 12:35:03.808119 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 12:35:03.808126 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 12:35:03.808132 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 12:35:03.808139 kernel: ACPI: Added _OSI(Module Device)
May 13 12:35:03.808147 kernel: ACPI: Added _OSI(Processor Device)
May 13 12:35:03.808154 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 12:35:03.808161 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 12:35:03.808168 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 12:35:03.808175 kernel: ACPI: Interpreter enabled
May 13 12:35:03.808182 kernel: ACPI: Using GIC for interrupt routing
May 13 12:35:03.808189 kernel: ACPI: MCFG table detected, 1 entries
May 13 12:35:03.808195 kernel: ACPI: CPU0 has been hot-added
May 13 12:35:03.808206 kernel: ACPI: CPU1 has been hot-added
May 13 12:35:03.808214 kernel: ACPI: CPU2 has been hot-added
May 13 12:35:03.808221 kernel: ACPI: CPU3 has been hot-added
May 13 12:35:03.808228 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 12:35:03.808235 kernel: printk: legacy console [ttyAMA0] enabled
May 13 12:35:03.808242 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 12:35:03.808379 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 12:35:03.808449 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 12:35:03.808509 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 12:35:03.808570 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 12:35:03.808628 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 12:35:03.808637 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 12:35:03.808645 kernel: PCI host bridge to bus 0000:00
May 13 12:35:03.808710 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 12:35:03.808765 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 12:35:03.808817 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 12:35:03.808871 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 12:35:03.809007 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 13 12:35:03.809084 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 12:35:03.809145 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 13 12:35:03.809213 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 13 12:35:03.809276 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 12:35:03.809335 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 13 12:35:03.809397 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 13 12:35:03.809461 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 13 12:35:03.809516 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 12:35:03.809570 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 12:35:03.809622 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 12:35:03.809631 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 12:35:03.809638 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 12:35:03.809647 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 12:35:03.809654 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 12:35:03.809661 kernel: iommu: Default domain type: Translated
May 13 12:35:03.809668 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 12:35:03.809675 kernel: efivars: Registered efivars operations
May 13 12:35:03.809682 kernel: vgaarb: loaded
May 13 12:35:03.809688 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 12:35:03.809695 kernel: VFS: Disk quotas dquot_6.6.0
May 13 12:35:03.809702 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 12:35:03.809710 kernel: pnp: PnP ACPI init
May 13 12:35:03.809775 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 12:35:03.809785 kernel: pnp: PnP ACPI: found 1 devices
May 13 12:35:03.809792 kernel: NET: Registered PF_INET protocol family
May 13 12:35:03.809799 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 12:35:03.809806 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 12:35:03.809813 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 12:35:03.809820 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 12:35:03.809829 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 12:35:03.809836 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 12:35:03.809842 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:35:03.809849 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:35:03.809856 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 12:35:03.809863 kernel: PCI: CLS 0 bytes, default 64
May 13 12:35:03.809870 kernel: kvm [1]: HYP mode not available
May 13 12:35:03.809877 kernel: Initialise system trusted keyrings
May 13 12:35:03.809883 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 12:35:03.809892 kernel: Key type asymmetric registered
May 13 12:35:03.809907 kernel: Asymmetric key parser 'x509' registered
May 13 12:35:03.809914 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 12:35:03.809921 kernel: io scheduler mq-deadline registered
May 13 12:35:03.809928 kernel: io scheduler kyber registered
May 13 12:35:03.809935 kernel: io scheduler bfq registered
May 13 12:35:03.809942 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 12:35:03.809949 kernel: ACPI: button: Power Button [PWRB]
May 13 12:35:03.809956 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 12:35:03.810036 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 12:35:03.810046 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 12:35:03.810052 kernel: thunder_xcv, ver 1.0
May 13 12:35:03.810059 kernel: thunder_bgx, ver 1.0
May 13 12:35:03.810066 kernel: nicpf, ver 1.0
May 13 12:35:03.810073 kernel: nicvf, ver 1.0
May 13 12:35:03.810139 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 12:35:03.810195 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T12:35:03 UTC (1747139703)
May 13 12:35:03.810212 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 12:35:03.810219 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 13 12:35:03.810226 kernel: watchdog: NMI not fully supported
May 13 12:35:03.810233 kernel: watchdog: Hard watchdog permanently disabled
May 13 12:35:03.810240 kernel: NET: Registered PF_INET6 protocol family
May 13 12:35:03.810246 kernel: Segment Routing with IPv6
May 13 12:35:03.810253 kernel: In-situ OAM (IOAM) with IPv6
May 13 12:35:03.810260 kernel: NET: Registered PF_PACKET protocol family
May 13 12:35:03.810267 kernel: Key type dns_resolver registered
May 13 12:35:03.810275 kernel: registered taskstats version 1
May 13 12:35:03.810282 kernel: Loading compiled-in X.509 certificates
May 13 12:35:03.810289 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: f8df872077a0531ef71a44c67653908e8a70c520'
May 13 12:35:03.810296 kernel: Demotion targets for Node 0: null
May 13 12:35:03.810303 kernel: Key type .fscrypt registered
May 13 12:35:03.810310 kernel: Key type fscrypt-provisioning registered
May 13 12:35:03.810317 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 12:35:03.810323 kernel: ima: Allocated hash algorithm: sha1
May 13 12:35:03.810330 kernel: ima: No architecture policies found
May 13 12:35:03.810338 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 12:35:03.810345 kernel: clk: Disabling unused clocks
May 13 12:35:03.810352 kernel: PM: genpd: Disabling unused power domains
May 13 12:35:03.810359 kernel: Warning: unable to open an initial console.
May 13 12:35:03.810366 kernel: Freeing unused kernel memory: 39488K
May 13 12:35:03.810373 kernel: Run /init as init process
May 13 12:35:03.810379 kernel: with arguments:
May 13 12:35:03.810386 kernel: /init
May 13 12:35:03.810393 kernel: with environment:
May 13 12:35:03.810401 kernel: HOME=/
May 13 12:35:03.810408 kernel: TERM=linux
May 13 12:35:03.810414 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 12:35:03.810422 systemd[1]: Successfully made /usr/ read-only.
May 13 12:35:03.810432 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:35:03.810440 systemd[1]: Detected virtualization kvm.
May 13 12:35:03.810447 systemd[1]: Detected architecture arm64.
May 13 12:35:03.810456 systemd[1]: Running in initrd.
May 13 12:35:03.810463 systemd[1]: No hostname configured, using default hostname.
May 13 12:35:03.810470 systemd[1]: Hostname set to .
May 13 12:35:03.810478 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:35:03.810485 systemd[1]: Queued start job for default target initrd.target.
May 13 12:35:03.810492 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:35:03.810500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:35:03.810507 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 12:35:03.810515 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:35:03.810524 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 12:35:03.810532 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 12:35:03.810540 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 12:35:03.810548 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 12:35:03.810556 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:35:03.810563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:35:03.810572 systemd[1]: Reached target paths.target - Path Units.
May 13 12:35:03.810579 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:35:03.810587 systemd[1]: Reached target swap.target - Swaps.
May 13 12:35:03.810594 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:35:03.810602 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:35:03.810609 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:35:03.810636 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 12:35:03.810643 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 12:35:03.810651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:35:03.810660 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:35:03.810668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:35:03.810675 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:35:03.810683 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 12:35:03.810690 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:35:03.810697 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 12:35:03.810705 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 12:35:03.810713 systemd[1]: Starting systemd-fsck-usr.service...
May 13 12:35:03.810721 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:35:03.810729 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:35:03.810736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:35:03.810743 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 12:35:03.810751 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:35:03.810760 systemd[1]: Finished systemd-fsck-usr.service.
May 13 12:35:03.810785 systemd-journald[246]: Collecting audit messages is disabled.
May 13 12:35:03.810803 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 12:35:03.810811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:35:03.810820 systemd-journald[246]: Journal started
May 13 12:35:03.810838 systemd-journald[246]: Runtime Journal (/run/log/journal/265399d38287492697e6b00cf3407a7f) is 6M, max 48.5M, 42.4M free.
May 13 12:35:03.805552 systemd-modules-load[247]: Inserted module 'overlay'
May 13 12:35:03.814543 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:35:03.817852 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 12:35:03.821036 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:35:03.826624 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 12:35:03.826643 kernel: Bridge firewalling registered
May 13 12:35:03.825534 systemd-modules-load[247]: Inserted module 'br_netfilter'
May 13 12:35:03.829021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 12:35:03.830366 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:35:03.835752 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 12:35:03.836046 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:35:03.839883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:35:03.842856 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:35:03.846753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:35:03.850171 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:35:03.851293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:35:03.853346 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:35:03.862484 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 12:35:03.889517 systemd-resolved[288]: Positive Trust Anchors:
May 13 12:35:03.889533 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:35:03.889570 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:35:03.894280 systemd-resolved[288]: Defaulting to hostname 'linux'.
May 13 12:35:03.900781 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b20e935bbd8772a1b0c6883755acb6e2a52b7a903a0b8e12c8ff59ca86b84928
May 13 12:35:03.895305 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:35:03.899812 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:35:03.968943 kernel: SCSI subsystem initialized
May 13 12:35:03.974920 kernel: Loading iSCSI transport class v2.0-870.
May 13 12:35:03.981922 kernel: iscsi: registered transport (tcp)
May 13 12:35:03.993933 kernel: iscsi: registered transport (qla4xxx)
May 13 12:35:03.993993 kernel: QLogic iSCSI HBA Driver
May 13 12:35:04.009999 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:35:04.031000 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:35:04.033110 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:35:04.075230 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 12:35:04.077425 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 12:35:04.138939 kernel: raid6: neonx8 gen() 15811 MB/s
May 13 12:35:04.155919 kernel: raid6: neonx4 gen() 15812 MB/s
May 13 12:35:04.172922 kernel: raid6: neonx2 gen() 13201 MB/s
May 13 12:35:04.189925 kernel: raid6: neonx1 gen() 10545 MB/s
May 13 12:35:04.206928 kernel: raid6: int64x8 gen() 6887 MB/s
May 13 12:35:04.223931 kernel: raid6: int64x4 gen() 7362 MB/s
May 13 12:35:04.240928 kernel: raid6: int64x2 gen() 6102 MB/s
May 13 12:35:04.258006 kernel: raid6: int64x1 gen() 5056 MB/s
May 13 12:35:04.258039 kernel: raid6: using algorithm neonx4 gen() 15812 MB/s
May 13 12:35:04.275991 kernel: raid6: .... xor() 12394 MB/s, rmw enabled
May 13 12:35:04.276026 kernel: raid6: using neon recovery algorithm
May 13 12:35:04.281359 kernel: xor: measuring software checksum speed
May 13 12:35:04.281379 kernel: 8regs : 21562 MB/sec
May 13 12:35:04.282045 kernel: 32regs : 21676 MB/sec
May 13 12:35:04.283292 kernel: arm64_neon : 27644 MB/sec
May 13 12:35:04.283315 kernel: xor: using function: arm64_neon (27644 MB/sec)
May 13 12:35:04.335923 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 12:35:04.342677 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:35:04.345205 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:35:04.370059 systemd-udevd[501]: Using default interface naming scheme 'v255'.
May 13 12:35:04.374083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:35:04.376347 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 12:35:04.398167 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
May 13 12:35:04.418937 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:35:04.421094 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:35:04.473935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:35:04.476890 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 12:35:04.522940 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 12:35:04.528270 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 12:35:04.529991 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:35:04.530110 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:35:04.539085 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 12:35:04.539105 kernel: GPT:9289727 != 19775487
May 13 12:35:04.539115 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 12:35:04.539123 kernel: GPT:9289727 != 19775487
May 13 12:35:04.539131 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 12:35:04.539140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:35:04.538518 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:35:04.540716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:35:04.564108 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 12:35:04.570884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:35:04.572220 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 12:35:04.589647 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 12:35:04.595791 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 12:35:04.596994 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 12:35:04.606094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:35:04.607294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:35:04.609288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:35:04.611318 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:35:04.613917 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 12:35:04.615648 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 12:35:04.639706 disk-uuid[592]: Primary Header is updated.
May 13 12:35:04.639706 disk-uuid[592]: Secondary Entries is updated.
May 13 12:35:04.639706 disk-uuid[592]: Secondary Header is updated.
May 13 12:35:04.643236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:35:04.645830 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:35:05.654910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:35:05.654959 disk-uuid[597]: The operation has completed successfully.
May 13 12:35:05.677547 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 12:35:05.677643 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 12:35:05.703551 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 12:35:05.727475 sh[612]: Success
May 13 12:35:05.741119 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 12:35:05.741160 kernel: device-mapper: uevent: version 1.0.3
May 13 12:35:05.744928 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 12:35:05.756819 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 13 12:35:05.781342 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 12:35:05.784020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 12:35:05.795967 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 12:35:05.804562 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 12:35:05.804607 kernel: BTRFS: device fsid 5ded7f9d-c045-4eec-a161-ff9af5b01d28 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (624)
May 13 12:35:05.805950 kernel: BTRFS info (device dm-0): first mount of filesystem 5ded7f9d-c045-4eec-a161-ff9af5b01d28
May 13 12:35:05.805975 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 12:35:05.807480 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 12:35:05.810625 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 12:35:05.811842 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:35:05.813270 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 12:35:05.813958 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 12:35:05.815488 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 12:35:05.843004 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (657)
May 13 12:35:05.843044 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:35:05.845552 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:35:05.845593 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:35:05.854946 kernel: BTRFS info (device vda6): last unmount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:35:05.856396 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 12:35:05.858350 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 12:35:05.923705 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:35:05.927349 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:35:05.969812 systemd-networkd[799]: lo: Link UP
May 13 12:35:05.969824 systemd-networkd[799]: lo: Gained carrier
May 13 12:35:05.970541 systemd-networkd[799]: Enumeration completed
May 13 12:35:05.970635 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:35:05.971064 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:35:05.971068 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:35:05.971648 systemd-networkd[799]: eth0: Link UP
May 13 12:35:05.971651 systemd-networkd[799]: eth0: Gained carrier
May 13 12:35:05.971659 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:35:05.972031 systemd[1]: Reached target network.target - Network.
May 13 12:35:05.989383 ignition[703]: Ignition 2.21.0
May 13 12:35:05.989398 ignition[703]: Stage: fetch-offline
May 13 12:35:05.989426 ignition[703]: no configs at "/usr/lib/ignition/base.d"
May 13 12:35:05.990949 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 12:35:05.989433 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:35:05.989620 ignition[703]: parsed url from cmdline: ""
May 13 12:35:05.989623 ignition[703]: no config URL provided
May 13 12:35:05.989627 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
May 13 12:35:05.989633 ignition[703]: no config at "/usr/lib/ignition/user.ign"
May 13 12:35:05.989651 ignition[703]: op(1): [started] loading QEMU firmware config module
May 13 12:35:05.989655 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 12:35:05.997946 ignition[703]: op(1): [finished] loading QEMU firmware config module
May 13 12:35:06.035002 ignition[703]: parsing config with SHA512: b240f3a02f37ea16a043d03ea0e9c702a1cd55aac17348f9960f043dc74277cac1b79984f341499824f9997b45a611e21a29548af034b0815dd2fe347cd7f6bc
May 13 12:35:06.040876 unknown[703]: fetched base config from "system"
May 13 12:35:06.041709 unknown[703]: fetched user config from "qemu"
May 13 12:35:06.042104 ignition[703]: fetch-offline: fetch-offline passed
May 13 12:35:06.042161 ignition[703]: Ignition finished successfully
May 13 12:35:06.043942 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:35:06.045662 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 12:35:06.046448 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 12:35:06.087237 ignition[814]: Ignition 2.21.0
May 13 12:35:06.087256 ignition[814]: Stage: kargs
May 13 12:35:06.087381 ignition[814]: no configs at "/usr/lib/ignition/base.d"
May 13 12:35:06.087388 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:35:06.089090 ignition[814]: kargs: kargs passed
May 13 12:35:06.089145 ignition[814]: Ignition finished successfully
May 13 12:35:06.091655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 12:35:06.094180 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 12:35:06.132950 ignition[822]: Ignition 2.21.0
May 13 12:35:06.132961 ignition[822]: Stage: disks
May 13 12:35:06.133180 ignition[822]: no configs at "/usr/lib/ignition/base.d"
May 13 12:35:06.133198 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:35:06.134291 ignition[822]: disks: disks passed
May 13 12:35:06.134340 ignition[822]: Ignition finished successfully
May 13 12:35:06.138222 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 12:35:06.139517 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 12:35:06.140832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 12:35:06.142752 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:35:06.144548 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:35:06.146176 systemd[1]: Reached target basic.target - Basic System.
May 13 12:35:06.148542 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 12:35:06.188636 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 12:35:06.343747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 12:35:06.346409 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 12:35:06.409932 kernel: EXT4-fs (vda9): mounted filesystem 02660b30-6941-48da-9f0e-501a024e2c48 r/w with ordered data mode. Quota mode: none.
May 13 12:35:06.410632 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 12:35:06.411812 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 12:35:06.413990 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:35:06.415507 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 12:35:06.416400 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 12:35:06.416440 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 12:35:06.416460 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:35:06.433093 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 12:35:06.435385 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 12:35:06.441065 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (840)
May 13 12:35:06.441108 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:35:06.441127 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:35:06.441146 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:35:06.444715 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:35:06.478063 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
May 13 12:35:06.481408 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
May 13 12:35:06.484067 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
May 13 12:35:06.487526 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 12:35:06.555638 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 12:35:06.557612 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 12:35:06.559108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 12:35:06.577116 kernel: BTRFS info (device vda6): last unmount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:35:06.588936 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 12:35:06.594906 ignition[954]: INFO : Ignition 2.21.0
May 13 12:35:06.594906 ignition[954]: INFO : Stage: mount
May 13 12:35:06.596394 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:35:06.596394 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:35:06.596394 ignition[954]: INFO : mount: mount passed
May 13 12:35:06.596394 ignition[954]: INFO : Ignition finished successfully
May 13 12:35:06.597697 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 12:35:06.600169 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 12:35:06.803336 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 12:35:06.804834 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:35:06.830913 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (966)
May 13 12:35:06.830951 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:35:06.830961 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:35:06.832504 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:35:06.835011 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:35:06.856815 ignition[983]: INFO : Ignition 2.21.0
May 13 12:35:06.856815 ignition[983]: INFO : Stage: files
May 13 12:35:06.858345 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:35:06.858345 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:35:06.860423 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
May 13 12:35:06.861620 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 12:35:06.861620 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 12:35:06.864864 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 12:35:06.866172 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 12:35:06.866172 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 12:35:06.865342 unknown[983]: wrote ssh authorized keys file for user: core
May 13 12:35:06.869767 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 12:35:06.869767 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 13 12:35:06.916242 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 12:35:06.989157 systemd-networkd[799]: eth0: Gained IPv6LL
May 13 12:35:07.113619 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 12:35:07.113619 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:35:07.117530 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 12:35:07.134140 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 12:35:07.134140 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 12:35:07.134140 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 12:35:07.512913 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 12:35:08.329663 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 12:35:08.332310 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 12:35:08.332310 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:35:08.335284 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:35:08.335284 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 12:35:08.335284 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 12:35:08.335284 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:35:08.335284 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:35:08.335284 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 12:35:08.335284 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 12:35:08.354775 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:35:08.358006 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:35:08.360682 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 12:35:08.360682 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 12:35:08.360682 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 12:35:08.360682 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:35:08.360682 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:35:08.360682 ignition[983]: INFO : files: files passed
May 13 12:35:08.360682 ignition[983]: INFO : Ignition finished successfully
May 13 12:35:08.362609 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 12:35:08.365124 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 12:35:08.366908 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 12:35:08.377888 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 12:35:08.378031 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 12:35:08.381290 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 12:35:08.382590 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:35:08.382590 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:35:08.386477 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:35:08.383645 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:35:08.385498 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 12:35:08.388035 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 12:35:08.424733 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 12:35:08.424864 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 12:35:08.427010 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 12:35:08.428832 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 12:35:08.430675 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 12:35:08.431429 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 12:35:08.459193 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:35:08.461525 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 12:35:08.479624 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 12:35:08.481771 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:35:08.482965 systemd[1]: Stopped target timers.target - Timer Units.
May 13 12:35:08.484699 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 12:35:08.484819 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:35:08.487257 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 12:35:08.489166 systemd[1]: Stopped target basic.target - Basic System.
May 13 12:35:08.490910 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 12:35:08.492513 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:35:08.494360 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 12:35:08.496083 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:35:08.497909 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 12:35:08.499744 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:35:08.501563 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 12:35:08.503532 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 12:35:08.505215 systemd[1]: Stopped target swap.target - Swaps.
May 13 12:35:08.506698 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 12:35:08.506821 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:35:08.509103 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 12:35:08.511004 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:35:08.512919 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 12:35:08.513988 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:35:08.515782 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 12:35:08.515911 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 12:35:08.518499 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 12:35:08.518672 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:35:08.520514 systemd[1]: Stopped target paths.target - Path Units.
May 13 12:35:08.521873 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 12:35:08.524951 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:35:08.526277 systemd[1]: Stopped target slices.target - Slice Units.
May 13 12:35:08.528096 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 12:35:08.529588 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 12:35:08.529713 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:35:08.531094 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 12:35:08.531222 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:35:08.532626 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 12:35:08.532781 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:35:08.534371 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 12:35:08.534515 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 12:35:08.536738 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 12:35:08.539091 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 12:35:08.540190 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 12:35:08.540360 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:35:08.542111 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 12:35:08.542261 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:35:08.549357 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 12:35:08.550448 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 12:35:08.553800 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 12:35:08.558118 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 12:35:08.558227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 12:35:08.564150 ignition[1038]: INFO : Ignition 2.21.0
May 13 12:35:08.564150 ignition[1038]: INFO : Stage: umount
May 13 12:35:08.565797 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:35:08.565797 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:35:08.565797 ignition[1038]: INFO : umount: umount passed
May 13 12:35:08.565797 ignition[1038]: INFO : Ignition finished successfully
May 13 12:35:08.567371 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 12:35:08.567469 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 12:35:08.569154 systemd[1]: Stopped target network.target - Network.
May 13 12:35:08.571258 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 12:35:08.571318 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 12:35:08.573964 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 12:35:08.574013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 12:35:08.575757 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 12:35:08.575807 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 12:35:08.577342 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 12:35:08.577385 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 12:35:08.579113 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 12:35:08.579164 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 12:35:08.580999 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 12:35:08.582664 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 12:35:08.592110 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 12:35:08.592240 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 12:35:08.597358 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 12:35:08.597551 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 12:35:08.597640 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 12:35:08.601892 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 12:35:08.602413 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 13 12:35:08.603982 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 12:35:08.604017 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:35:08.607242 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 12:35:08.608096 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 12:35:08.608156 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:35:08.610265 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 12:35:08.610315 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 12:35:08.612560 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 12:35:08.612604 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 12:35:08.614975 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 12:35:08.615025 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:35:08.620163 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:35:08.622824 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 12:35:08.622881 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 12:35:08.640636 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 12:35:08.640782 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:35:08.642950 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 12:35:08.642990 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 12:35:08.644910 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 12:35:08.644953 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:35:08.646660 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 12:35:08.646710 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:35:08.649278 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 12:35:08.649327 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 12:35:08.651869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 12:35:08.651938 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:35:08.655475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 12:35:08.656540 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 13 12:35:08.656596 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:35:08.659599 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 12:35:08.659644 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:35:08.662827 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 12:35:08.662867 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 12:35:08.666112 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 12:35:08.666152 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:35:08.668330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:35:08.668372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:35:08.672444 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 13 12:35:08.672491 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 13 12:35:08.672518 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 12:35:08.672545 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 12:35:08.672810 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 12:35:08.672886 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 12:35:08.676182 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 12:35:08.676253 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 12:35:08.678619 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 12:35:08.680398 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 12:35:08.693771 systemd[1]: Switching root.
May 13 12:35:08.723365 systemd-journald[246]: Journal stopped
May 13 12:35:09.502782 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
May 13 12:35:09.502833 kernel: SELinux: policy capability network_peer_controls=1
May 13 12:35:09.502844 kernel: SELinux: policy capability open_perms=1
May 13 12:35:09.502854 kernel: SELinux: policy capability extended_socket_class=1
May 13 12:35:09.502866 kernel: SELinux: policy capability always_check_network=0
May 13 12:35:09.502877 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 12:35:09.502890 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 12:35:09.502941 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 12:35:09.502953 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 12:35:09.502962 kernel: SELinux: policy capability userspace_initial_context=0
May 13 12:35:09.502974 kernel: audit: type=1403 audit(1747139708.891:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 12:35:09.502985 systemd[1]: Successfully loaded SELinux policy in 48.728ms.
May 13 12:35:09.503002 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.614ms.
May 13 12:35:09.503013 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:35:09.503023 systemd[1]: Detected virtualization kvm.
May 13 12:35:09.503033 systemd[1]: Detected architecture arm64.
May 13 12:35:09.503043 systemd[1]: Detected first boot.
May 13 12:35:09.503053 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:35:09.503063 zram_generator::config[1083]: No configuration found.
May 13 12:35:09.503076 kernel: NET: Registered PF_VSOCK protocol family
May 13 12:35:09.503086 systemd[1]: Populated /etc with preset unit settings.
May 13 12:35:09.503100 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 12:35:09.503110 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 12:35:09.503120 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 12:35:09.503130 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 12:35:09.503140 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 12:35:09.503156 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 12:35:09.503177 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 12:35:09.503187 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 12:35:09.503197 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 12:35:09.503207 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 12:35:09.503217 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 12:35:09.503226 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 12:35:09.503236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:35:09.503246 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:35:09.503256 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 12:35:09.503268 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 12:35:09.503278 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 12:35:09.503288 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:35:09.503298 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 12:35:09.503308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:35:09.503318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:35:09.503328 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 12:35:09.503339 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 12:35:09.503349 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 12:35:09.503359 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 12:35:09.503369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:35:09.503378 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:35:09.503388 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:35:09.503398 systemd[1]: Reached target swap.target - Swaps.
May 13 12:35:09.503408 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 12:35:09.503419 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 12:35:09.503430 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 12:35:09.503440 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:35:09.503450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:35:09.503461 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:35:09.503471 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 12:35:09.503480 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 12:35:09.503491 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 12:35:09.503501 systemd[1]: Mounting media.mount - External Media Directory...
May 13 12:35:09.503511 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 12:35:09.503523 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 12:35:09.503534 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 12:35:09.503544 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 12:35:09.503553 systemd[1]: Reached target machines.target - Containers.
May 13 12:35:09.503564 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 12:35:09.503574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:35:09.503585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:35:09.503595 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 12:35:09.503605 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:35:09.503620 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:35:09.503629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:35:09.503639 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 12:35:09.503650 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:35:09.503660 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 12:35:09.503670 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 12:35:09.503680 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 12:35:09.503691 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 12:35:09.503702 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 12:35:09.503713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:35:09.503723 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:35:09.503733 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:35:09.503742 kernel: fuse: init (API version 7.41)
May 13 12:35:09.503752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:35:09.503762 kernel: loop: module loaded
May 13 12:35:09.503771 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 12:35:09.503783 kernel: ACPI: bus type drm_connector registered
May 13 12:35:09.503794 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 12:35:09.503805 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:35:09.503815 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 12:35:09.503825 systemd[1]: Stopped verity-setup.service.
May 13 12:35:09.503834 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 12:35:09.503845 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 12:35:09.503855 systemd[1]: Mounted media.mount - External Media Directory.
May 13 12:35:09.503867 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 12:35:09.503877 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 12:35:09.503887 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 12:35:09.503907 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 12:35:09.503919 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:35:09.503930 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 12:35:09.503941 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 12:35:09.503975 systemd-journald[1151]: Collecting audit messages is disabled.
May 13 12:35:09.503997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:35:09.504008 systemd-journald[1151]: Journal started
May 13 12:35:09.504031 systemd-journald[1151]: Runtime Journal (/run/log/journal/265399d38287492697e6b00cf3407a7f) is 6M, max 48.5M, 42.4M free.
May 13 12:35:09.248979 systemd[1]: Queued start job for default target multi-user.target.
May 13 12:35:09.272833 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 12:35:09.273232 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 12:35:09.505138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:35:09.508241 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:35:09.509032 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:35:09.509218 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:35:09.510512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:35:09.510679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:35:09.512159 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 12:35:09.512337 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 12:35:09.513774 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:35:09.513980 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:35:09.515357 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:35:09.516801 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:35:09.518516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 12:35:09.520135 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 12:35:09.533266 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:35:09.535821 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 12:35:09.537990 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 12:35:09.539119 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 12:35:09.539152 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:35:09.541106 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 12:35:09.547691 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 12:35:09.549092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:35:09.550424 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 12:35:09.552500 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 12:35:09.553739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:35:09.556406 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 12:35:09.557563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:35:09.560062 systemd-journald[1151]: Time spent on flushing to /var/log/journal/265399d38287492697e6b00cf3407a7f is 26.041ms for 886 entries.
May 13 12:35:09.560062 systemd-journald[1151]: System Journal (/var/log/journal/265399d38287492697e6b00cf3407a7f) is 8M, max 195.6M, 187.6M free.
May 13 12:35:09.600126 systemd-journald[1151]: Received client request to flush runtime journal.
May 13 12:35:09.600192 kernel: loop0: detected capacity change from 0 to 107312
May 13 12:35:09.560058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:35:09.563473 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 12:35:09.570033 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 12:35:09.572703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:35:09.574959 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 12:35:09.577203 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 12:35:09.588480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:35:09.592041 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 12:35:09.593570 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 12:35:09.597286 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 12:35:09.603945 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 12:35:09.609125 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 13 12:35:09.610959 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 12:35:09.609135 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 13 12:35:09.613671 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 12:35:09.616827 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 12:35:09.636093 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 12:35:09.636913 kernel: loop1: detected capacity change from 0 to 201592
May 13 12:35:09.656146 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 12:35:09.658644 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:35:09.663868 kernel: loop2: detected capacity change from 0 to 138376
May 13 12:35:09.678200 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
May 13 12:35:09.678476 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
May 13 12:35:09.682449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:35:09.699959 kernel: loop3: detected capacity change from 0 to 107312
May 13 12:35:09.704924 kernel: loop4: detected capacity change from 0 to 201592
May 13 12:35:09.712964 kernel: loop5: detected capacity change from 0 to 138376
May 13 12:35:09.718852 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 12:35:09.719243 (sd-merge)[1224]: Merged extensions into '/usr'.
May 13 12:35:09.722988 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 12:35:09.723009 systemd[1]: Reloading...
May 13 12:35:09.785927 zram_generator::config[1249]: No configuration found.
May 13 12:35:09.826973 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 12:35:09.864829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:35:09.926692 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 12:35:09.927133 systemd[1]: Reloading finished in 203 ms.
May 13 12:35:09.955127 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 12:35:09.956608 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 12:35:09.971198 systemd[1]: Starting ensure-sysext.service...
May 13 12:35:09.972893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:35:09.986709 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
May 13 12:35:09.986727 systemd[1]: Reloading...
May 13 12:35:09.995985 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 13 12:35:09.996014 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 13 12:35:09.996265 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 12:35:09.996444 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 12:35:09.997060 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 12:35:09.997279 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
May 13 12:35:09.997330 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
May 13 12:35:10.000109 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:35:10.000122 systemd-tmpfiles[1285]: Skipping /boot
May 13 12:35:10.009793 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:35:10.009815 systemd-tmpfiles[1285]: Skipping /boot
May 13 12:35:10.038010 zram_generator::config[1313]: No configuration found.
May 13 12:35:10.099228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:35:10.160208 systemd[1]: Reloading finished in 173 ms.
May 13 12:35:10.182974 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 12:35:10.189924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:35:10.200984 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:35:10.203542 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 12:35:10.210634 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 12:35:10.213463 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:35:10.221058 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:35:10.225460 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 12:35:10.246891 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 12:35:10.248680 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 12:35:10.254823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:35:10.257108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:35:10.259352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:35:10.268468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:35:10.269637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:35:10.269836 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:35:10.272270 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 12:35:10.275208 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 12:35:10.277065 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
May 13 12:35:10.279804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:35:10.280116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:35:10.281816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:35:10.281970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:35:10.283852 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:35:10.284025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:35:10.285688 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 12:35:10.291066 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 12:35:10.294735 augenrules[1383]: No rules
May 13 12:35:10.295807 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:35:10.296250 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:35:10.299419 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 12:35:10.301464 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:35:10.303615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:35:10.309845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:35:10.312682 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:35:10.314762 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:35:10.315931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:35:10.316069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:35:10.319206 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:35:10.322006 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 12:35:10.326684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:35:10.326907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:35:10.330358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:35:10.330607 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:35:10.333612 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:35:10.333783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:35:10.339813 systemd[1]: Finished ensure-sysext.service.
May 13 12:35:10.345673 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:35:10.346958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:35:10.349990 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:35:10.351418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:35:10.351542 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:35:10.351581 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:35:10.351626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:35:10.358827 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 12:35:10.360477 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 12:35:10.362642 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:35:10.362876 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:35:10.384970 augenrules[1434]: /sbin/augenrules: No change
May 13 12:35:10.386996 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 12:35:10.395336 augenrules[1458]: No rules
May 13 12:35:10.397715 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:35:10.400014 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:35:10.426056 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:35:10.430055 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 12:35:10.455298 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 12:35:10.485352 systemd-networkd[1421]: lo: Link UP
May 13 12:35:10.485360 systemd-networkd[1421]: lo: Gained carrier
May 13 12:35:10.486707 systemd-networkd[1421]: Enumeration completed
May 13 12:35:10.486824 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:35:10.487197 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:35:10.487201 systemd-networkd[1421]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:35:10.487815 systemd-networkd[1421]: eth0: Link UP
May 13 12:35:10.487938 systemd-networkd[1421]: eth0: Gained carrier
May 13 12:35:10.487953 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:35:10.491273 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 12:35:10.494031 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 12:35:10.498670 systemd-resolved[1352]: Positive Trust Anchors:
May 13 12:35:10.498687 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:35:10.498719 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:35:10.502691 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 12:35:10.503985 systemd[1]: Reached target time-set.target - System Time Set.
May 13 12:35:10.507969 systemd-networkd[1421]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 12:35:10.508596 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection.
May 13 12:35:10.509280 systemd-resolved[1352]: Defaulting to hostname 'linux'.
May 13 12:35:10.509893 systemd-timesyncd[1436]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 12:35:10.509961 systemd-timesyncd[1436]: Initial clock synchronization to Tue 2025-05-13 12:35:10.561168 UTC.
May 13 12:35:10.516245 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:35:10.519931 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 12:35:10.522613 systemd[1]: Reached target network.target - Network.
May 13 12:35:10.523521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:35:10.525226 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:35:10.526874 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 12:35:10.528745 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 12:35:10.530122 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 12:35:10.533170 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 12:35:10.534342 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 12:35:10.535572 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 12:35:10.535601 systemd[1]: Reached target paths.target - Path Units. May 13 12:35:10.536484 systemd[1]: Reached target timers.target - Timer Units. May 13 12:35:10.538340 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 12:35:10.540812 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 12:35:10.545129 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 12:35:10.546809 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 12:35:10.548308 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 12:35:10.551503 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 12:35:10.553290 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 12:35:10.555359 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 12:35:10.561684 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:35:10.562845 systemd[1]: Reached target basic.target - Basic System. May 13 12:35:10.563939 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
May 13 12:35:10.564026 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 12:35:10.565045 systemd[1]: Starting containerd.service - containerd container runtime... May 13 12:35:10.567138 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 12:35:10.568982 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 12:35:10.576776 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 12:35:10.578762 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 12:35:10.579770 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 12:35:10.580810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 12:35:10.585074 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 12:35:10.586275 jq[1494]: false May 13 12:35:10.587018 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 12:35:10.589055 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 12:35:10.592688 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 12:35:10.594608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:35:10.596549 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 13 12:35:10.597771 extend-filesystems[1495]: Found loop3 May 13 12:35:10.597771 extend-filesystems[1495]: Found loop4 May 13 12:35:10.597771 extend-filesystems[1495]: Found loop5 May 13 12:35:10.597771 extend-filesystems[1495]: Found vda May 13 12:35:10.597771 extend-filesystems[1495]: Found vda1 May 13 12:35:10.606529 extend-filesystems[1495]: Found vda2 May 13 12:35:10.606529 extend-filesystems[1495]: Found vda3 May 13 12:35:10.606529 extend-filesystems[1495]: Found usr May 13 12:35:10.606529 extend-filesystems[1495]: Found vda4 May 13 12:35:10.606529 extend-filesystems[1495]: Found vda6 May 13 12:35:10.606529 extend-filesystems[1495]: Found vda7 May 13 12:35:10.606529 extend-filesystems[1495]: Found vda9 May 13 12:35:10.606529 extend-filesystems[1495]: Checking size of /dev/vda9 May 13 12:35:10.598037 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 12:35:10.598617 systemd[1]: Starting update-engine.service - Update Engine... May 13 12:35:10.600429 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 12:35:10.604312 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 12:35:10.607968 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 12:35:10.613985 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 12:35:10.616840 jq[1509]: true May 13 12:35:10.614643 systemd[1]: motdgen.service: Deactivated successfully. May 13 12:35:10.616043 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 12:35:10.619169 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 12:35:10.619335 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 13 12:35:10.636598 extend-filesystems[1495]: Resized partition /dev/vda9 May 13 12:35:10.644845 (ntainerd)[1521]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 12:35:10.655437 jq[1520]: true May 13 12:35:10.663009 extend-filesystems[1531]: resize2fs 1.47.2 (1-Jan-2025) May 13 12:35:10.673678 tar[1518]: linux-arm64/LICENSE May 13 12:35:10.673938 tar[1518]: linux-arm64/helm May 13 12:35:10.681437 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 12:35:10.701927 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 12:35:10.715282 update_engine[1508]: I20250513 12:35:10.707578 1508 main.cc:92] Flatcar Update Engine starting May 13 12:35:10.715282 update_engine[1508]: I20250513 12:35:10.712738 1508 update_check_scheduler.cc:74] Next update check in 4m54s May 13 12:35:10.709715 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 12:35:10.709253 dbus-daemon[1492]: [system] SELinux support is enabled May 13 12:35:10.715375 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 12:35:10.715397 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 12:35:10.716374 systemd-logind[1502]: Watching system buttons on /dev/input/event0 (Power Button) May 13 12:35:10.716961 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 12:35:10.716984 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 12:35:10.718721 systemd[1]: Started update-engine.service - Update Engine. May 13 12:35:10.719082 systemd-logind[1502]: New seat seat0. 
May 13 12:35:10.722295 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 12:35:10.724770 extend-filesystems[1531]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 12:35:10.724770 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 12:35:10.724770 extend-filesystems[1531]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 12:35:10.733967 extend-filesystems[1495]: Resized filesystem in /dev/vda9 May 13 12:35:10.726824 systemd[1]: Started systemd-logind.service - User Login Management. May 13 12:35:10.729692 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 12:35:10.730139 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 12:35:10.738837 bash[1551]: Updated "/home/core/.ssh/authorized_keys" May 13 12:35:10.752800 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 12:35:10.754727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:35:10.767756 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 13 12:35:10.792186 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 12:35:10.894802 containerd[1521]: time="2025-05-13T12:35:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 12:35:10.897889 containerd[1521]: time="2025-05-13T12:35:10.897849800Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 13 12:35:10.907976 containerd[1521]: time="2025-05-13T12:35:10.907782600Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.2µs" May 13 12:35:10.907976 containerd[1521]: time="2025-05-13T12:35:10.907815880Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 12:35:10.907976 containerd[1521]: time="2025-05-13T12:35:10.907913560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 12:35:10.908136 containerd[1521]: time="2025-05-13T12:35:10.908109800Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 12:35:10.908186 containerd[1521]: time="2025-05-13T12:35:10.908138040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 12:35:10.908186 containerd[1521]: time="2025-05-13T12:35:10.908175640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:35:10.908251 containerd[1521]: time="2025-05-13T12:35:10.908232240Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:35:10.908275 containerd[1521]: time="2025-05-13T12:35:10.908249720Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:35:10.908532 containerd[1521]: time="2025-05-13T12:35:10.908496560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:35:10.908532 containerd[1521]: time="2025-05-13T12:35:10.908523240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:35:10.908577 containerd[1521]: time="2025-05-13T12:35:10.908536280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:35:10.908577 containerd[1521]: time="2025-05-13T12:35:10.908544880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 12:35:10.908695 containerd[1521]: time="2025-05-13T12:35:10.908674280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 12:35:10.908982 containerd[1521]: time="2025-05-13T12:35:10.908960160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:35:10.909014 containerd[1521]: time="2025-05-13T12:35:10.908999120Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:35:10.909035 containerd[1521]: time="2025-05-13T12:35:10.909013440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 12:35:10.909117 containerd[1521]: time="2025-05-13T12:35:10.909097240Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 12:35:10.909441 containerd[1521]: time="2025-05-13T12:35:10.909418200Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 12:35:10.909509 containerd[1521]: time="2025-05-13T12:35:10.909492960Z" level=info msg="metadata content store policy set" policy=shared May 13 12:35:10.912819 containerd[1521]: time="2025-05-13T12:35:10.912777640Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 12:35:10.912905 containerd[1521]: time="2025-05-13T12:35:10.912880520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 12:35:10.912956 containerd[1521]: time="2025-05-13T12:35:10.912939800Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 12:35:10.912991 containerd[1521]: time="2025-05-13T12:35:10.912955040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 12:35:10.912991 containerd[1521]: time="2025-05-13T12:35:10.912967800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 12:35:10.912991 containerd[1521]: time="2025-05-13T12:35:10.912981880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 12:35:10.913100 containerd[1521]: time="2025-05-13T12:35:10.912994120Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 12:35:10.913100 containerd[1521]: time="2025-05-13T12:35:10.913006280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 12:35:10.913214 containerd[1521]: time="2025-05-13T12:35:10.913017040Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 12:35:10.913241 containerd[1521]: time="2025-05-13T12:35:10.913218320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 12:35:10.913241 containerd[1521]: time="2025-05-13T12:35:10.913232880Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 12:35:10.913279 containerd[1521]: time="2025-05-13T12:35:10.913248280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 12:35:10.913393 containerd[1521]: time="2025-05-13T12:35:10.913365120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 12:35:10.913417 containerd[1521]: time="2025-05-13T12:35:10.913395360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 12:35:10.913417 containerd[1521]: time="2025-05-13T12:35:10.913412440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 12:35:10.913449 containerd[1521]: time="2025-05-13T12:35:10.913423720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 12:35:10.913449 containerd[1521]: time="2025-05-13T12:35:10.913439720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 12:35:10.913484 containerd[1521]: time="2025-05-13T12:35:10.913452760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 12:35:10.913484 containerd[1521]: time="2025-05-13T12:35:10.913464640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 12:35:10.913484 containerd[1521]: time="2025-05-13T12:35:10.913475480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 
12:35:10.913535 containerd[1521]: time="2025-05-13T12:35:10.913498360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 12:35:10.913535 containerd[1521]: time="2025-05-13T12:35:10.913509640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 12:35:10.913535 containerd[1521]: time="2025-05-13T12:35:10.913519840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 12:35:10.913724 containerd[1521]: time="2025-05-13T12:35:10.913700160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 12:35:10.913761 containerd[1521]: time="2025-05-13T12:35:10.913722600Z" level=info msg="Start snapshots syncer" May 13 12:35:10.913761 containerd[1521]: time="2025-05-13T12:35:10.913750480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 12:35:10.914165 containerd[1521]: time="2025-05-13T12:35:10.914105840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 12:35:10.914285 containerd[1521]: time="2025-05-13T12:35:10.914230960Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 12:35:10.914342 containerd[1521]: time="2025-05-13T12:35:10.914320120Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 12:35:10.914555 containerd[1521]: time="2025-05-13T12:35:10.914522040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 12:35:10.914581 containerd[1521]: time="2025-05-13T12:35:10.914565560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 12:35:10.914581 containerd[1521]: time="2025-05-13T12:35:10.914578080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 12:35:10.914614 containerd[1521]: time="2025-05-13T12:35:10.914590560Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 12:35:10.914614 containerd[1521]: time="2025-05-13T12:35:10.914606320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 12:35:10.914684 containerd[1521]: time="2025-05-13T12:35:10.914665840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 12:35:10.914710 containerd[1521]: time="2025-05-13T12:35:10.914688240Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 12:35:10.914728 containerd[1521]: time="2025-05-13T12:35:10.914720720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 12:35:10.914745 containerd[1521]: time="2025-05-13T12:35:10.914732800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 12:35:10.914803 containerd[1521]: time="2025-05-13T12:35:10.914785360Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 12:35:10.914861 containerd[1521]: time="2025-05-13T12:35:10.914847080Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:35:10.914885 containerd[1521]: time="2025-05-13T12:35:10.914866360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:35:10.914885 containerd[1521]: time="2025-05-13T12:35:10.914876440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:35:10.914938 containerd[1521]: time="2025-05-13T12:35:10.914886360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:35:10.914938 containerd[1521]: time="2025-05-13T12:35:10.914894920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 12:35:10.914938 containerd[1521]: time="2025-05-13T12:35:10.914917200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 12:35:10.914938 containerd[1521]: time="2025-05-13T12:35:10.914927400Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 12:35:10.915077 containerd[1521]: time="2025-05-13T12:35:10.915059120Z" level=info msg="runtime interface created" May 13 12:35:10.915077 containerd[1521]: time="2025-05-13T12:35:10.915074600Z" level=info msg="created NRI interface" May 13 12:35:10.915113 containerd[1521]: time="2025-05-13T12:35:10.915087240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 12:35:10.915169 containerd[1521]: time="2025-05-13T12:35:10.915100240Z" level=info msg="Connect containerd service" May 13 12:35:10.915206 containerd[1521]: time="2025-05-13T12:35:10.915188280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 12:35:10.916379 
containerd[1521]: time="2025-05-13T12:35:10.916342920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:35:11.023541 containerd[1521]: time="2025-05-13T12:35:11.023425179Z" level=info msg="Start subscribing containerd event" May 13 12:35:11.023541 containerd[1521]: time="2025-05-13T12:35:11.023505169Z" level=info msg="Start recovering state" May 13 12:35:11.023652 containerd[1521]: time="2025-05-13T12:35:11.023587172Z" level=info msg="Start event monitor" May 13 12:35:11.023652 containerd[1521]: time="2025-05-13T12:35:11.023601181Z" level=info msg="Start cni network conf syncer for default" May 13 12:35:11.023652 containerd[1521]: time="2025-05-13T12:35:11.023618008Z" level=info msg="Start streaming server" May 13 12:35:11.023652 containerd[1521]: time="2025-05-13T12:35:11.023628797Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 12:35:11.023652 containerd[1521]: time="2025-05-13T12:35:11.023635842Z" level=info msg="runtime interface starting up..." May 13 12:35:11.023652 containerd[1521]: time="2025-05-13T12:35:11.023642565Z" level=info msg="starting plugins..." May 13 12:35:11.023802 containerd[1521]: time="2025-05-13T12:35:11.023656776Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 12:35:11.024971 containerd[1521]: time="2025-05-13T12:35:11.023893606Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 12:35:11.024971 containerd[1521]: time="2025-05-13T12:35:11.023972026Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 12:35:11.024971 containerd[1521]: time="2025-05-13T12:35:11.024037121Z" level=info msg="containerd successfully booted in 0.129641s" May 13 12:35:11.024146 systemd[1]: Started containerd.service - containerd container runtime. 
May 13 12:35:11.098328 tar[1518]: linux-arm64/README.md May 13 12:35:11.120065 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 12:35:11.597097 systemd-networkd[1421]: eth0: Gained IPv6LL May 13 12:35:11.602614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 12:35:11.605250 systemd[1]: Reached target network-online.target - Network is Online. May 13 12:35:11.607879 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 12:35:11.610569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:35:11.615910 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 12:35:11.620092 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 12:35:11.639436 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 12:35:11.639660 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 12:35:11.642572 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 12:35:11.644083 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 12:35:11.647461 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 12:35:11.648583 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 12:35:11.665735 systemd[1]: issuegen.service: Deactivated successfully. May 13 12:35:11.665982 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 12:35:11.668851 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 12:35:11.701053 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 12:35:11.704480 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 12:35:11.706707 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
May 13 12:35:11.708167 systemd[1]: Reached target getty.target - Login Prompts. May 13 12:35:12.144517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:35:12.146009 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 12:35:12.147054 systemd[1]: Startup finished in 2.119s (kernel) + 5.259s (initrd) + 3.313s (userspace) = 10.693s. May 13 12:35:12.148188 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:35:12.558397 kubelet[1628]: E0513 12:35:12.558278 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:35:12.560775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:35:12.560928 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:35:12.561248 systemd[1]: kubelet.service: Consumed 791ms CPU time, 247.5M memory peak. May 13 12:35:16.032305 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 12:35:16.033416 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:40232.service - OpenSSH per-connection server daemon (10.0.0.1:40232). May 13 12:35:16.132614 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 40232 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:35:16.134254 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:35:16.142004 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 12:35:16.142936 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 13 12:35:16.147954 systemd-logind[1502]: New session 1 of user core. May 13 12:35:16.161175 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 12:35:16.163589 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 12:35:16.176738 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 12:35:16.178939 systemd-logind[1502]: New session c1 of user core. May 13 12:35:16.277455 systemd[1645]: Queued start job for default target default.target. May 13 12:35:16.283822 systemd[1645]: Created slice app.slice - User Application Slice. May 13 12:35:16.283854 systemd[1645]: Reached target paths.target - Paths. May 13 12:35:16.283890 systemd[1645]: Reached target timers.target - Timers. May 13 12:35:16.285059 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 12:35:16.293255 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 12:35:16.293312 systemd[1645]: Reached target sockets.target - Sockets. May 13 12:35:16.293348 systemd[1645]: Reached target basic.target - Basic System. May 13 12:35:16.293374 systemd[1645]: Reached target default.target - Main User Target. May 13 12:35:16.293404 systemd[1645]: Startup finished in 109ms. May 13 12:35:16.293537 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 12:35:16.294790 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 12:35:16.361733 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:40242.service - OpenSSH per-connection server daemon (10.0.0.1:40242). May 13 12:35:16.412862 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 40242 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:35:16.413969 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:35:16.417892 systemd-logind[1502]: New session 2 of user core. 
May 13 12:35:16.424058 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 12:35:16.473916 sshd[1658]: Connection closed by 10.0.0.1 port 40242
May 13 12:35:16.474164 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
May 13 12:35:16.490747 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:40242.service: Deactivated successfully.
May 13 12:35:16.493160 systemd[1]: session-2.scope: Deactivated successfully.
May 13 12:35:16.493744 systemd-logind[1502]: Session 2 logged out. Waiting for processes to exit.
May 13 12:35:16.496143 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:40256.service - OpenSSH per-connection server daemon (10.0.0.1:40256).
May 13 12:35:16.497093 systemd-logind[1502]: Removed session 2.
May 13 12:35:16.546607 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 40256 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:35:16.547622 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:35:16.551962 systemd-logind[1502]: New session 3 of user core.
May 13 12:35:16.561089 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 12:35:16.608213 sshd[1666]: Connection closed by 10.0.0.1 port 40256
May 13 12:35:16.608459 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
May 13 12:35:16.618615 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:40256.service: Deactivated successfully.
May 13 12:35:16.619863 systemd[1]: session-3.scope: Deactivated successfully.
May 13 12:35:16.620508 systemd-logind[1502]: Session 3 logged out. Waiting for processes to exit.
May 13 12:35:16.622481 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:40260.service - OpenSSH per-connection server daemon (10.0.0.1:40260).
May 13 12:35:16.625082 systemd-logind[1502]: Removed session 3.
May 13 12:35:16.672426 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 40260 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:35:16.673570 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:35:16.676939 systemd-logind[1502]: New session 4 of user core.
May 13 12:35:16.692091 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 12:35:16.741938 sshd[1674]: Connection closed by 10.0.0.1 port 40260
May 13 12:35:16.742289 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
May 13 12:35:16.753522 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:40260.service: Deactivated successfully.
May 13 12:35:16.755954 systemd[1]: session-4.scope: Deactivated successfully.
May 13 12:35:16.756505 systemd-logind[1502]: Session 4 logged out. Waiting for processes to exit.
May 13 12:35:16.758692 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:40268.service - OpenSSH per-connection server daemon (10.0.0.1:40268).
May 13 12:35:16.759149 systemd-logind[1502]: Removed session 4.
May 13 12:35:16.822876 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 40268 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:35:16.823660 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:35:16.827965 systemd-logind[1502]: New session 5 of user core.
May 13 12:35:16.838052 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 12:35:16.894447 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 12:35:16.894704 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:35:16.907449 sudo[1683]: pam_unix(sudo:session): session closed for user root
May 13 12:35:16.908758 sshd[1682]: Connection closed by 10.0.0.1 port 40268
May 13 12:35:16.909204 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
May 13 12:35:16.922793 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:40268.service: Deactivated successfully.
May 13 12:35:16.925129 systemd[1]: session-5.scope: Deactivated successfully.
May 13 12:35:16.925759 systemd-logind[1502]: Session 5 logged out. Waiting for processes to exit.
May 13 12:35:16.928001 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:40280.service - OpenSSH per-connection server daemon (10.0.0.1:40280).
May 13 12:35:16.928804 systemd-logind[1502]: Removed session 5.
May 13 12:35:16.982992 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 40280 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:35:16.984198 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:35:16.988295 systemd-logind[1502]: New session 6 of user core.
May 13 12:35:16.999045 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 12:35:17.048469 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 12:35:17.048719 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:35:17.118424 sudo[1693]: pam_unix(sudo:session): session closed for user root
May 13 12:35:17.123458 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 12:35:17.123728 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:35:17.131600 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:35:17.166552 augenrules[1715]: No rules
May 13 12:35:17.167148 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:35:17.167349 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:35:17.168119 sudo[1692]: pam_unix(sudo:session): session closed for user root
May 13 12:35:17.169202 sshd[1691]: Connection closed by 10.0.0.1 port 40280
May 13 12:35:17.169498 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
May 13 12:35:17.184840 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:40280.service: Deactivated successfully.
May 13 12:35:17.187209 systemd[1]: session-6.scope: Deactivated successfully.
May 13 12:35:17.188882 systemd-logind[1502]: Session 6 logged out. Waiting for processes to exit.
May 13 12:35:17.190242 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:40288.service - OpenSSH per-connection server daemon (10.0.0.1:40288).
May 13 12:35:17.191066 systemd-logind[1502]: Removed session 6.
May 13 12:35:17.236229 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 40288 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:35:17.237360 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:35:17.240717 systemd-logind[1502]: New session 7 of user core.
May 13 12:35:17.252074 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 12:35:17.301053 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 12:35:17.301315 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:35:17.646179 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 12:35:17.669166 (dockerd)[1748]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 12:35:17.929694 dockerd[1748]: time="2025-05-13T12:35:17.929572290Z" level=info msg="Starting up"
May 13 12:35:17.930495 dockerd[1748]: time="2025-05-13T12:35:17.930473879Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 12:35:17.972367 dockerd[1748]: time="2025-05-13T12:35:17.972227318Z" level=info msg="Loading containers: start."
May 13 12:35:17.979929 kernel: Initializing XFRM netlink socket
May 13 12:35:18.162125 systemd-networkd[1421]: docker0: Link UP
May 13 12:35:18.164610 dockerd[1748]: time="2025-05-13T12:35:18.164566042Z" level=info msg="Loading containers: done."
May 13 12:35:18.179896 dockerd[1748]: time="2025-05-13T12:35:18.179799378Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 12:35:18.179896 dockerd[1748]: time="2025-05-13T12:35:18.179875690Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 13 12:35:18.180049 dockerd[1748]: time="2025-05-13T12:35:18.179998037Z" level=info msg="Initializing buildkit"
May 13 12:35:18.203376 dockerd[1748]: time="2025-05-13T12:35:18.203335346Z" level=info msg="Completed buildkit initialization"
May 13 12:35:18.208084 dockerd[1748]: time="2025-05-13T12:35:18.208045066Z" level=info msg="Daemon has completed initialization"
May 13 12:35:18.208146 dockerd[1748]: time="2025-05-13T12:35:18.208103814Z" level=info msg="API listen on /run/docker.sock"
May 13 12:35:18.208265 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 12:35:18.916594 containerd[1521]: time="2025-05-13T12:35:18.916527522Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 13 12:35:18.951033 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2865444322-merged.mount: Deactivated successfully.
May 13 12:35:19.640222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618705633.mount: Deactivated successfully.
May 13 12:35:20.573924 containerd[1521]: time="2025-05-13T12:35:20.573866813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:20.575287 containerd[1521]: time="2025-05-13T12:35:20.575187837Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120"
May 13 12:35:20.576126 containerd[1521]: time="2025-05-13T12:35:20.576081158Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:20.578927 containerd[1521]: time="2025-05-13T12:35:20.578881952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:20.580059 containerd[1521]: time="2025-05-13T12:35:20.580017619Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.66339387s"
May 13 12:35:20.580123 containerd[1521]: time="2025-05-13T12:35:20.580071362Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 13 12:35:20.580760 containerd[1521]: time="2025-05-13T12:35:20.580656569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 13 12:35:21.870721 containerd[1521]: time="2025-05-13T12:35:21.870674784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:21.871140 containerd[1521]: time="2025-05-13T12:35:21.871112402Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573"
May 13 12:35:21.872017 containerd[1521]: time="2025-05-13T12:35:21.871965520Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:21.874364 containerd[1521]: time="2025-05-13T12:35:21.874314920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:21.876824 containerd[1521]: time="2025-05-13T12:35:21.876323385Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.295632112s"
May 13 12:35:21.876824 containerd[1521]: time="2025-05-13T12:35:21.876361890Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 13 12:35:21.877824 containerd[1521]: time="2025-05-13T12:35:21.877793904Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 13 12:35:22.811268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 12:35:22.813920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:35:22.948806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:35:22.952259 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:35:23.050540 kubelet[2030]: E0513 12:35:23.050495 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:35:23.053389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:35:23.053527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:35:23.053840 systemd[1]: kubelet.service: Consumed 142ms CPU time, 104.9M memory peak.
May 13 12:35:23.067485 containerd[1521]: time="2025-05-13T12:35:23.067384777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:23.068462 containerd[1521]: time="2025-05-13T12:35:23.068431529Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175"
May 13 12:35:23.069410 containerd[1521]: time="2025-05-13T12:35:23.069387483Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:23.072250 containerd[1521]: time="2025-05-13T12:35:23.072201876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:23.073009 containerd[1521]: time="2025-05-13T12:35:23.072983245Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.195158372s"
May 13 12:35:23.073055 containerd[1521]: time="2025-05-13T12:35:23.073015126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 13 12:35:23.073440 containerd[1521]: time="2025-05-13T12:35:23.073404148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 13 12:35:24.132977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745485306.mount: Deactivated successfully.
May 13 12:35:24.469434 containerd[1521]: time="2025-05-13T12:35:24.469309575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:24.469983 containerd[1521]: time="2025-05-13T12:35:24.469941449Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353"
May 13 12:35:24.470642 containerd[1521]: time="2025-05-13T12:35:24.470603878Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:24.472377 containerd[1521]: time="2025-05-13T12:35:24.472339318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:24.473033 containerd[1521]: time="2025-05-13T12:35:24.473002227Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.399442359s"
May 13 12:35:24.473068 containerd[1521]: time="2025-05-13T12:35:24.473035225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 13 12:35:24.473529 containerd[1521]: time="2025-05-13T12:35:24.473503394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 13 12:35:25.019896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112575773.mount: Deactivated successfully.
May 13 12:35:25.818136 containerd[1521]: time="2025-05-13T12:35:25.818086193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:25.818947 containerd[1521]: time="2025-05-13T12:35:25.818831730Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 13 12:35:25.820345 containerd[1521]: time="2025-05-13T12:35:25.820305828Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:25.822531 containerd[1521]: time="2025-05-13T12:35:25.822478656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:25.823600 containerd[1521]: time="2025-05-13T12:35:25.823572097Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.350036228s"
May 13 12:35:25.823658 containerd[1521]: time="2025-05-13T12:35:25.823606251Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 13 12:35:25.824438 containerd[1521]: time="2025-05-13T12:35:25.824417133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 12:35:26.286796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942936737.mount: Deactivated successfully.
May 13 12:35:26.291107 containerd[1521]: time="2025-05-13T12:35:26.291063972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:35:26.291485 containerd[1521]: time="2025-05-13T12:35:26.291455711Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 13 12:35:26.292328 containerd[1521]: time="2025-05-13T12:35:26.292300762Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:35:26.294500 containerd[1521]: time="2025-05-13T12:35:26.294055400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:35:26.294616 containerd[1521]: time="2025-05-13T12:35:26.294579854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 470.119079ms"
May 13 12:35:26.294655 containerd[1521]: time="2025-05-13T12:35:26.294622611Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 13 12:35:26.295274 containerd[1521]: time="2025-05-13T12:35:26.295080247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 13 12:35:26.834462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300098926.mount: Deactivated successfully.
May 13 12:35:28.756755 containerd[1521]: time="2025-05-13T12:35:28.756653873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:28.757207 containerd[1521]: time="2025-05-13T12:35:28.757173097Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
May 13 12:35:28.758005 containerd[1521]: time="2025-05-13T12:35:28.757977710Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:28.760726 containerd[1521]: time="2025-05-13T12:35:28.760699474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:35:28.762082 containerd[1521]: time="2025-05-13T12:35:28.762049448Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.466937455s"
May 13 12:35:28.762129 containerd[1521]: time="2025-05-13T12:35:28.762084591Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 13 12:35:33.122472 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 12:35:33.124170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:35:33.257112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:35:33.260366 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:35:33.291790 kubelet[2186]: E0513 12:35:33.291735 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:35:33.294464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:35:33.294708 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:35:33.295106 systemd[1]: kubelet.service: Consumed 124ms CPU time, 102.3M memory peak.
May 13 12:35:33.664946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:35:33.665091 systemd[1]: kubelet.service: Consumed 124ms CPU time, 102.3M memory peak.
May 13 12:35:33.666956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:35:33.687021 systemd[1]: Reload requested from client PID 2201 ('systemctl') (unit session-7.scope)...
May 13 12:35:33.687038 systemd[1]: Reloading...
May 13 12:35:33.758951 zram_generator::config[2253]: No configuration found.
May 13 12:35:33.914387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:35:33.997773 systemd[1]: Reloading finished in 310 ms.
May 13 12:35:34.051314 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 12:35:34.051384 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 12:35:34.051620 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:35:34.051661 systemd[1]: kubelet.service: Consumed 80ms CPU time, 90.2M memory peak.
May 13 12:35:34.053029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:35:34.166195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:35:34.169883 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 12:35:34.205092 kubelet[2289]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 12:35:34.205092 kubelet[2289]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 12:35:34.205092 kubelet[2289]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 12:35:34.205462 kubelet[2289]: I0513 12:35:34.205147 2289 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 12:35:34.585803 kubelet[2289]: I0513 12:35:34.585754 2289 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 12:35:34.585803 kubelet[2289]: I0513 12:35:34.585790 2289 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 12:35:34.586100 kubelet[2289]: I0513 12:35:34.586069 2289 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 12:35:34.618797 kubelet[2289]: E0513 12:35:34.618753 2289 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
May 13 12:35:34.621395 kubelet[2289]: I0513 12:35:34.621368 2289 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 12:35:34.627589 kubelet[2289]: I0513 12:35:34.627556 2289 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 12:35:34.630238 kubelet[2289]: I0513 12:35:34.630211 2289 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 12:35:34.630419 kubelet[2289]: I0513 12:35:34.630388 2289 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 12:35:34.630573 kubelet[2289]: I0513 12:35:34.630413 2289 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 12:35:34.630664 kubelet[2289]: I0513 12:35:34.630635 2289 topology_manager.go:138] "Creating topology manager with none policy"
May 13 12:35:34.630664 kubelet[2289]: I0513 12:35:34.630643 2289 container_manager_linux.go:304] "Creating device plugin manager"
May 13 12:35:34.630839 kubelet[2289]: I0513 12:35:34.630813 2289 state_mem.go:36] "Initialized new in-memory state store"
May 13 12:35:34.633160 kubelet[2289]: I0513 12:35:34.633117 2289 kubelet.go:446] "Attempting to sync node with API server"
May 13 12:35:34.633160 kubelet[2289]: I0513 12:35:34.633148 2289 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 12:35:34.633264 kubelet[2289]: I0513 12:35:34.633170 2289 kubelet.go:352] "Adding apiserver pod source"
May 13 12:35:34.633264 kubelet[2289]: I0513 12:35:34.633185 2289 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 12:35:34.635432 kubelet[2289]: W0513 12:35:34.635383 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
May 13 12:35:34.635472 kubelet[2289]: E0513 12:35:34.635440 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
May 13 12:35:34.635945 kubelet[2289]: W0513 12:35:34.635913 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
May 13 12:35:34.636042 kubelet[2289]: E0513 12:35:34.635955 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError"
May 13 12:35:34.636224 kubelet[2289]: I0513 12:35:34.636205 2289 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 13 12:35:34.636918 kubelet[2289]: I0513 12:35:34.636886 2289 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 12:35:34.637040 kubelet[2289]: W0513 12:35:34.637021 2289 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 12:35:34.638239 kubelet[2289]: I0513 12:35:34.638212 2289 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 12:35:34.638281 kubelet[2289]: I0513 12:35:34.638253 2289 server.go:1287] "Started kubelet"
May 13 12:35:34.639036 kubelet[2289]: I0513 12:35:34.638868 2289 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 12:35:34.640043 kubelet[2289]: I0513 12:35:34.640018 2289 server.go:490] "Adding debug handlers to kubelet server"
May 13 12:35:34.641442 kubelet[2289]: I0513 12:35:34.641346 2289 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 12:35:34.641813 kubelet[2289]: I0513 12:35:34.641798 2289 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 12:35:34.643496 kubelet[2289]: I0513 12:35:34.643462 2289 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 12:35:34.645327 kubelet[2289]: I0513 12:35:34.645303 2289 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 12:35:34.646360 kubelet[2289]: I0513 12:35:34.646022 2289 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 12:35:34.646605 kubelet[2289]: E0513 12:35:34.646582 2289 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 12:35:34.646893 kubelet[2289]: I0513 12:35:34.646870 2289 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 12:35:34.646957 kubelet[2289]: I0513 12:35:34.646930 2289 reconciler.go:26] "Reconciler: start to sync state"
May 13 12:35:34.647959 kubelet[2289]: E0513 12:35:34.647733 2289 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f1654f815053b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:35:34.638232891 +0000 UTC m=+0.465184542,LastTimestamp:2025-05-13 12:35:34.638232891 +0000 UTC m=+0.465184542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 12:35:34.648155 kubelet[2289]: W0513 12:35:34.648082 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused
May 13 12:35:34.648197 kubelet[2289]: E0513 12:35:34.648161 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused"
logger="UnhandledError" May 13 12:35:34.648317 kubelet[2289]: E0513 12:35:34.648288 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms" May 13 12:35:34.649044 kubelet[2289]: I0513 12:35:34.648446 2289 factory.go:221] Registration of the systemd container factory successfully May 13 12:35:34.649044 kubelet[2289]: I0513 12:35:34.648691 2289 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:35:34.649357 kubelet[2289]: I0513 12:35:34.649318 2289 factory.go:221] Registration of the containerd container factory successfully May 13 12:35:34.660680 kubelet[2289]: I0513 12:35:34.660638 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:35:34.661793 kubelet[2289]: I0513 12:35:34.661761 2289 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 12:35:34.661793 kubelet[2289]: I0513 12:35:34.661787 2289 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 12:35:34.661858 kubelet[2289]: I0513 12:35:34.661807 2289 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 12:35:34.661858 kubelet[2289]: I0513 12:35:34.661814 2289 kubelet.go:2388] "Starting kubelet main sync loop" May 13 12:35:34.661858 kubelet[2289]: E0513 12:35:34.661850 2289 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:35:34.663214 kubelet[2289]: I0513 12:35:34.663167 2289 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 12:35:34.663214 kubelet[2289]: I0513 12:35:34.663183 2289 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 12:35:34.663214 kubelet[2289]: I0513 12:35:34.663198 2289 state_mem.go:36] "Initialized new in-memory state store" May 13 12:35:34.664628 kubelet[2289]: W0513 12:35:34.664526 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 13 12:35:34.664628 kubelet[2289]: E0513 12:35:34.664578 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 13 12:35:34.747207 kubelet[2289]: E0513 12:35:34.747174 2289 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:35:34.762368 kubelet[2289]: E0513 12:35:34.762335 2289 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 12:35:34.781301 kubelet[2289]: I0513 12:35:34.781275 2289 policy_none.go:49] "None policy: Start" May 13 12:35:34.781301 kubelet[2289]: I0513 12:35:34.781302 2289 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 
12:35:34.781374 kubelet[2289]: I0513 12:35:34.781316 2289 state_mem.go:35] "Initializing new in-memory state store" May 13 12:35:34.786367 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 12:35:34.802642 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 12:35:34.819129 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 12:35:34.820768 kubelet[2289]: I0513 12:35:34.820742 2289 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:35:34.821210 kubelet[2289]: I0513 12:35:34.820930 2289 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:35:34.821210 kubelet[2289]: I0513 12:35:34.820947 2289 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:35:34.821210 kubelet[2289]: I0513 12:35:34.821196 2289 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:35:34.822063 kubelet[2289]: E0513 12:35:34.822029 2289 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 12:35:34.822135 kubelet[2289]: E0513 12:35:34.822078 2289 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 12:35:34.849099 kubelet[2289]: E0513 12:35:34.849023 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms" May 13 12:35:34.922080 kubelet[2289]: I0513 12:35:34.922059 2289 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:35:34.922409 kubelet[2289]: E0513 12:35:34.922388 2289 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" May 13 12:35:34.970421 systemd[1]: Created slice kubepods-burstable-podf4e85c26d733a99926ef278de36f8dd9.slice - libcontainer container kubepods-burstable-podf4e85c26d733a99926ef278de36f8dd9.slice. May 13 12:35:34.990664 kubelet[2289]: E0513 12:35:34.990626 2289 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:35:34.993743 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 12:35:34.995337 kubelet[2289]: E0513 12:35:34.995308 2289 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:35:35.014452 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 13 12:35:35.016599 kubelet[2289]: E0513 12:35:35.016579 2289 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:35:35.047845 kubelet[2289]: I0513 12:35:35.047818 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e85c26d733a99926ef278de36f8dd9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f4e85c26d733a99926ef278de36f8dd9\") " pod="kube-system/kube-apiserver-localhost" May 13 12:35:35.124041 kubelet[2289]: I0513 12:35:35.124008 2289 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:35:35.124306 kubelet[2289]: E0513 12:35:35.124282 2289 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" May 13 12:35:35.148973 kubelet[2289]: I0513 12:35:35.148842 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:35.148973 kubelet[2289]: I0513 12:35:35.148889 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:35.149063 kubelet[2289]: I0513 12:35:35.148978 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:35.149063 kubelet[2289]: I0513 12:35:35.149038 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 12:35:35.149126 kubelet[2289]: I0513 12:35:35.149093 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e85c26d733a99926ef278de36f8dd9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f4e85c26d733a99926ef278de36f8dd9\") " pod="kube-system/kube-apiserver-localhost" May 13 12:35:35.149150 kubelet[2289]: I0513 12:35:35.149122 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e85c26d733a99926ef278de36f8dd9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f4e85c26d733a99926ef278de36f8dd9\") " pod="kube-system/kube-apiserver-localhost" May 13 12:35:35.149150 kubelet[2289]: I0513 12:35:35.149140 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:35.149206 kubelet[2289]: I0513 12:35:35.149155 2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:35.249577 kubelet[2289]: E0513 12:35:35.249534 2289 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms" May 13 12:35:35.292262 kubelet[2289]: E0513 12:35:35.292232 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:35.292839 containerd[1521]: time="2025-05-13T12:35:35.292774751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f4e85c26d733a99926ef278de36f8dd9,Namespace:kube-system,Attempt:0,}" May 13 12:35:35.296022 kubelet[2289]: E0513 12:35:35.296001 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:35.296331 containerd[1521]: time="2025-05-13T12:35:35.296303589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 12:35:35.314070 containerd[1521]: time="2025-05-13T12:35:35.314032284Z" level=info msg="connecting to shim 6b237bb8d0d1ee95f219efca39e83e62f4ab96f87ea42ab63827c6b8286fcb5d" address="unix:///run/containerd/s/0af9494fee2e1e605332cdbb4380fdb3f10ae29ab93c59cd63301749e01a394d" namespace=k8s.io protocol=ttrpc version=3 May 13 12:35:35.318636 kubelet[2289]: E0513 12:35:35.317882 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:35.319215 containerd[1521]: time="2025-05-13T12:35:35.319177543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 12:35:35.320536 containerd[1521]: time="2025-05-13T12:35:35.320488805Z" level=info msg="connecting to shim 1a072ec349284712dc409b60af1a610df0e531428b357063967f5f389b2c30c8" address="unix:///run/containerd/s/8c2c0bc63ffff5c98abf235872eb1ffe355d9c6aa090635a7282e94c41bbb9fe" namespace=k8s.io protocol=ttrpc version=3 May 13 12:35:35.337061 systemd[1]: Started cri-containerd-6b237bb8d0d1ee95f219efca39e83e62f4ab96f87ea42ab63827c6b8286fcb5d.scope - libcontainer container 6b237bb8d0d1ee95f219efca39e83e62f4ab96f87ea42ab63827c6b8286fcb5d. May 13 12:35:35.348815 containerd[1521]: time="2025-05-13T12:35:35.348768686Z" level=info msg="connecting to shim 4a1ea1defce63b762e60176ba0b2fa4978252e647f20c775350cb30c11bfd586" address="unix:///run/containerd/s/e31c19d563ec722ee77b2dc8a476274cc55d55ad995169aa2aad9f3d4b90b89a" namespace=k8s.io protocol=ttrpc version=3 May 13 12:35:35.349036 systemd[1]: Started cri-containerd-1a072ec349284712dc409b60af1a610df0e531428b357063967f5f389b2c30c8.scope - libcontainer container 1a072ec349284712dc409b60af1a610df0e531428b357063967f5f389b2c30c8. May 13 12:35:35.375105 systemd[1]: Started cri-containerd-4a1ea1defce63b762e60176ba0b2fa4978252e647f20c775350cb30c11bfd586.scope - libcontainer container 4a1ea1defce63b762e60176ba0b2fa4978252e647f20c775350cb30c11bfd586. 
May 13 12:35:35.385693 containerd[1521]: time="2025-05-13T12:35:35.385651606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f4e85c26d733a99926ef278de36f8dd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b237bb8d0d1ee95f219efca39e83e62f4ab96f87ea42ab63827c6b8286fcb5d\"" May 13 12:35:35.387197 kubelet[2289]: E0513 12:35:35.387157 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:35.387525 containerd[1521]: time="2025-05-13T12:35:35.387330243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a072ec349284712dc409b60af1a610df0e531428b357063967f5f389b2c30c8\"" May 13 12:35:35.388076 kubelet[2289]: E0513 12:35:35.388043 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:35.392606 containerd[1521]: time="2025-05-13T12:35:35.392556244Z" level=info msg="CreateContainer within sandbox \"6b237bb8d0d1ee95f219efca39e83e62f4ab96f87ea42ab63827c6b8286fcb5d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 12:35:35.392783 containerd[1521]: time="2025-05-13T12:35:35.392742652Z" level=info msg="CreateContainer within sandbox \"1a072ec349284712dc409b60af1a610df0e531428b357063967f5f389b2c30c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 12:35:35.400818 containerd[1521]: time="2025-05-13T12:35:35.400789587Z" level=info msg="Container 176adc4b7238e38829fab23bd6ba8e602bf07c21091817b248deb6b456054273: CDI devices from CRI Config.CDIDevices: []" May 13 12:35:35.402270 containerd[1521]: time="2025-05-13T12:35:35.402237444Z" level=info msg="Container 
1e969233b1d09f96f4b12ff06c91d64f1da335c87bc06319f7a64d7ffbb4deae: CDI devices from CRI Config.CDIDevices: []" May 13 12:35:35.409623 containerd[1521]: time="2025-05-13T12:35:35.409586837Z" level=info msg="CreateContainer within sandbox \"1a072ec349284712dc409b60af1a610df0e531428b357063967f5f389b2c30c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e969233b1d09f96f4b12ff06c91d64f1da335c87bc06319f7a64d7ffbb4deae\"" May 13 12:35:35.410410 containerd[1521]: time="2025-05-13T12:35:35.410385485Z" level=info msg="StartContainer for \"1e969233b1d09f96f4b12ff06c91d64f1da335c87bc06319f7a64d7ffbb4deae\"" May 13 12:35:35.412482 containerd[1521]: time="2025-05-13T12:35:35.412321949Z" level=info msg="connecting to shim 1e969233b1d09f96f4b12ff06c91d64f1da335c87bc06319f7a64d7ffbb4deae" address="unix:///run/containerd/s/8c2c0bc63ffff5c98abf235872eb1ffe355d9c6aa090635a7282e94c41bbb9fe" protocol=ttrpc version=3 May 13 12:35:35.412617 containerd[1521]: time="2025-05-13T12:35:35.412592339Z" level=info msg="CreateContainer within sandbox \"6b237bb8d0d1ee95f219efca39e83e62f4ab96f87ea42ab63827c6b8286fcb5d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"176adc4b7238e38829fab23bd6ba8e602bf07c21091817b248deb6b456054273\"" May 13 12:35:35.413750 containerd[1521]: time="2025-05-13T12:35:35.413722073Z" level=info msg="StartContainer for \"176adc4b7238e38829fab23bd6ba8e602bf07c21091817b248deb6b456054273\"" May 13 12:35:35.414710 containerd[1521]: time="2025-05-13T12:35:35.414674561Z" level=info msg="connecting to shim 176adc4b7238e38829fab23bd6ba8e602bf07c21091817b248deb6b456054273" address="unix:///run/containerd/s/0af9494fee2e1e605332cdbb4380fdb3f10ae29ab93c59cd63301749e01a394d" protocol=ttrpc version=3 May 13 12:35:35.415964 containerd[1521]: time="2025-05-13T12:35:35.415885316Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a1ea1defce63b762e60176ba0b2fa4978252e647f20c775350cb30c11bfd586\"" May 13 12:35:35.416582 kubelet[2289]: E0513 12:35:35.416558 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:35.418328 containerd[1521]: time="2025-05-13T12:35:35.418021472Z" level=info msg="CreateContainer within sandbox \"4a1ea1defce63b762e60176ba0b2fa4978252e647f20c775350cb30c11bfd586\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 12:35:35.423937 containerd[1521]: time="2025-05-13T12:35:35.423893081Z" level=info msg="Container 7a088d5d157676e7c3b381d0d1c0513b6c696d485e378fe7893945cc01eb583e: CDI devices from CRI Config.CDIDevices: []" May 13 12:35:35.430830 containerd[1521]: time="2025-05-13T12:35:35.430797518Z" level=info msg="CreateContainer within sandbox \"4a1ea1defce63b762e60176ba0b2fa4978252e647f20c775350cb30c11bfd586\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a088d5d157676e7c3b381d0d1c0513b6c696d485e378fe7893945cc01eb583e\"" May 13 12:35:35.431307 containerd[1521]: time="2025-05-13T12:35:35.431285085Z" level=info msg="StartContainer for \"7a088d5d157676e7c3b381d0d1c0513b6c696d485e378fe7893945cc01eb583e\"" May 13 12:35:35.432042 systemd[1]: Started cri-containerd-176adc4b7238e38829fab23bd6ba8e602bf07c21091817b248deb6b456054273.scope - libcontainer container 176adc4b7238e38829fab23bd6ba8e602bf07c21091817b248deb6b456054273. 
May 13 12:35:35.432395 containerd[1521]: time="2025-05-13T12:35:35.432311832Z" level=info msg="connecting to shim 7a088d5d157676e7c3b381d0d1c0513b6c696d485e378fe7893945cc01eb583e" address="unix:///run/containerd/s/e31c19d563ec722ee77b2dc8a476274cc55d55ad995169aa2aad9f3d4b90b89a" protocol=ttrpc version=3 May 13 12:35:35.433224 systemd[1]: Started cri-containerd-1e969233b1d09f96f4b12ff06c91d64f1da335c87bc06319f7a64d7ffbb4deae.scope - libcontainer container 1e969233b1d09f96f4b12ff06c91d64f1da335c87bc06319f7a64d7ffbb4deae. May 13 12:35:35.458044 systemd[1]: Started cri-containerd-7a088d5d157676e7c3b381d0d1c0513b6c696d485e378fe7893945cc01eb583e.scope - libcontainer container 7a088d5d157676e7c3b381d0d1c0513b6c696d485e378fe7893945cc01eb583e. May 13 12:35:35.492202 containerd[1521]: time="2025-05-13T12:35:35.490226027Z" level=info msg="StartContainer for \"176adc4b7238e38829fab23bd6ba8e602bf07c21091817b248deb6b456054273\" returns successfully" May 13 12:35:35.492202 containerd[1521]: time="2025-05-13T12:35:35.490332735Z" level=info msg="StartContainer for \"1e969233b1d09f96f4b12ff06c91d64f1da335c87bc06319f7a64d7ffbb4deae\" returns successfully" May 13 12:35:35.518700 kubelet[2289]: W0513 12:35:35.518601 2289 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.39:6443: connect: connection refused May 13 12:35:35.518700 kubelet[2289]: E0513 12:35:35.518665 2289 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" May 13 12:35:35.526754 kubelet[2289]: I0513 12:35:35.526723 2289 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:35:35.528310 
kubelet[2289]: E0513 12:35:35.528181 2289 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" May 13 12:35:35.541823 containerd[1521]: time="2025-05-13T12:35:35.541685142Z" level=info msg="StartContainer for \"7a088d5d157676e7c3b381d0d1c0513b6c696d485e378fe7893945cc01eb583e\" returns successfully" May 13 12:35:35.672482 kubelet[2289]: E0513 12:35:35.672375 2289 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:35:35.672893 kubelet[2289]: E0513 12:35:35.672669 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:35.673305 kubelet[2289]: E0513 12:35:35.673286 2289 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:35:35.673413 kubelet[2289]: E0513 12:35:35.673398 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:35.681052 kubelet[2289]: E0513 12:35:35.681031 2289 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:35:35.681161 kubelet[2289]: E0513 12:35:35.681145 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:36.330841 kubelet[2289]: I0513 12:35:36.330808 2289 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:35:36.681808 kubelet[2289]: E0513 12:35:36.681781 2289 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:35:36.681932 kubelet[2289]: E0513 12:35:36.681895 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:36.682658 kubelet[2289]: E0513 12:35:36.682479 2289 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:35:36.682658 kubelet[2289]: E0513 12:35:36.682615 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:37.347997 kubelet[2289]: E0513 12:35:37.347950 2289 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 12:35:37.476354 kubelet[2289]: I0513 12:35:37.476307 2289 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 12:35:37.476354 kubelet[2289]: E0513 12:35:37.476346 2289 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 12:35:37.547498 kubelet[2289]: I0513 12:35:37.547464 2289 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:35:37.555612 kubelet[2289]: E0513 12:35:37.555400 2289 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 12:35:37.555612 kubelet[2289]: I0513 12:35:37.555432 2289 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 
12:35:37.557298 kubelet[2289]: E0513 12:35:37.557253 2289 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 12:35:37.557298 kubelet[2289]: I0513 12:35:37.557274 2289 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 12:35:37.558844 kubelet[2289]: E0513 12:35:37.558807 2289 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 12:35:37.635765 kubelet[2289]: I0513 12:35:37.635732 2289 apiserver.go:52] "Watching apiserver" May 13 12:35:37.647953 kubelet[2289]: I0513 12:35:37.647882 2289 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:35:37.686852 kubelet[2289]: I0513 12:35:37.686719 2289 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:35:37.688292 kubelet[2289]: E0513 12:35:37.688266 2289 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 12:35:37.688414 kubelet[2289]: E0513 12:35:37.688401 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:38.017892 kubelet[2289]: I0513 12:35:38.017785 2289 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:35:38.019832 kubelet[2289]: E0513 12:35:38.019799 2289 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 12:35:38.019993 kubelet[2289]: E0513 12:35:38.019977 2289 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:39.136957 systemd[1]: Reload requested from client PID 2565 ('systemctl') (unit session-7.scope)... May 13 12:35:39.136973 systemd[1]: Reloading... May 13 12:35:39.197938 zram_generator::config[2614]: No configuration found. May 13 12:35:39.257813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:35:39.355261 systemd[1]: Reloading finished in 218 ms. May 13 12:35:39.384611 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:35:39.396926 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:35:39.397202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:35:39.397257 systemd[1]: kubelet.service: Consumed 854ms CPU time, 123.1M memory peak. May 13 12:35:39.399053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:35:39.558598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:35:39.562434 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:35:39.600730 kubelet[2650]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:35:39.600730 kubelet[2650]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 13 12:35:39.600730 kubelet[2650]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:35:39.600730 kubelet[2650]: I0513 12:35:39.599690 2650 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:35:39.607468 kubelet[2650]: I0513 12:35:39.607438 2650 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 12:35:39.607468 kubelet[2650]: I0513 12:35:39.607465 2650 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:35:39.607718 kubelet[2650]: I0513 12:35:39.607702 2650 server.go:954] "Client rotation is on, will bootstrap in background" May 13 12:35:39.610477 kubelet[2650]: I0513 12:35:39.610457 2650 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 12:35:39.613705 kubelet[2650]: I0513 12:35:39.613668 2650 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:35:39.619432 kubelet[2650]: I0513 12:35:39.618456 2650 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:35:39.623835 kubelet[2650]: I0513 12:35:39.623774 2650 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 12:35:39.624100 kubelet[2650]: I0513 12:35:39.624062 2650 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:35:39.624285 kubelet[2650]: I0513 12:35:39.624100 2650 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 12:35:39.624363 kubelet[2650]: I0513 12:35:39.624295 2650 topology_manager.go:138] "Creating topology manager with none policy" 
May 13 12:35:39.624363 kubelet[2650]: I0513 12:35:39.624303 2650 container_manager_linux.go:304] "Creating device plugin manager" May 13 12:35:39.624363 kubelet[2650]: I0513 12:35:39.624346 2650 state_mem.go:36] "Initialized new in-memory state store" May 13 12:35:39.624482 kubelet[2650]: I0513 12:35:39.624470 2650 kubelet.go:446] "Attempting to sync node with API server" May 13 12:35:39.624507 kubelet[2650]: I0513 12:35:39.624483 2650 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:35:39.624507 kubelet[2650]: I0513 12:35:39.624504 2650 kubelet.go:352] "Adding apiserver pod source" May 13 12:35:39.624549 kubelet[2650]: I0513 12:35:39.624515 2650 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:35:39.625555 kubelet[2650]: I0513 12:35:39.625345 2650 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:35:39.627270 kubelet[2650]: I0513 12:35:39.627242 2650 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:35:39.627656 kubelet[2650]: I0513 12:35:39.627642 2650 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 12:35:39.627701 kubelet[2650]: I0513 12:35:39.627676 2650 server.go:1287] "Started kubelet" May 13 12:35:39.627765 kubelet[2650]: I0513 12:35:39.627737 2650 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:35:39.628656 kubelet[2650]: I0513 12:35:39.628626 2650 server.go:490] "Adding debug handlers to kubelet server" May 13 12:35:39.628938 kubelet[2650]: I0513 12:35:39.627882 2650 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:35:39.629141 kubelet[2650]: I0513 12:35:39.629119 2650 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:35:39.630613 kubelet[2650]: I0513 12:35:39.630583 2650 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:35:39.630686 kubelet[2650]: I0513 12:35:39.630657 2650 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 12:35:39.630823 kubelet[2650]: I0513 12:35:39.630731 2650 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:35:39.632211 kubelet[2650]: I0513 12:35:39.632185 2650 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 12:35:39.632324 kubelet[2650]: I0513 12:35:39.632307 2650 reconciler.go:26] "Reconciler: start to sync state" May 13 12:35:39.633247 kubelet[2650]: E0513 12:35:39.632936 2650 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:35:39.635470 kubelet[2650]: I0513 12:35:39.635444 2650 factory.go:221] Registration of the systemd container factory successfully May 13 12:35:39.635571 kubelet[2650]: I0513 12:35:39.635548 2650 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:35:39.650180 kubelet[2650]: E0513 12:35:39.649373 2650 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:35:39.651742 kubelet[2650]: I0513 12:35:39.651449 2650 factory.go:221] Registration of the containerd container factory successfully May 13 12:35:39.658541 kubelet[2650]: I0513 12:35:39.658507 2650 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:35:39.659584 kubelet[2650]: I0513 12:35:39.659565 2650 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 12:35:39.659696 kubelet[2650]: I0513 12:35:39.659685 2650 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 12:35:39.660135 kubelet[2650]: I0513 12:35:39.659853 2650 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 12:35:39.660135 kubelet[2650]: I0513 12:35:39.659870 2650 kubelet.go:2388] "Starting kubelet main sync loop" May 13 12:35:39.660135 kubelet[2650]: E0513 12:35:39.659950 2650 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:35:39.686479 kubelet[2650]: I0513 12:35:39.686456 2650 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 12:35:39.686612 kubelet[2650]: I0513 12:35:39.686600 2650 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 12:35:39.686681 kubelet[2650]: I0513 12:35:39.686661 2650 state_mem.go:36] "Initialized new in-memory state store" May 13 12:35:39.686969 kubelet[2650]: I0513 12:35:39.686953 2650 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 12:35:39.687067 kubelet[2650]: I0513 12:35:39.687043 2650 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 12:35:39.687140 kubelet[2650]: I0513 12:35:39.687132 2650 policy_none.go:49] "None policy: Start" May 13 12:35:39.687188 kubelet[2650]: I0513 12:35:39.687179 2650 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 12:35:39.687249 kubelet[2650]: I0513 12:35:39.687241 2650 state_mem.go:35] "Initializing new in-memory state store" May 13 12:35:39.687430 kubelet[2650]: I0513 12:35:39.687418 2650 state_mem.go:75] "Updated machine memory state" May 13 12:35:39.691394 kubelet[2650]: I0513 12:35:39.691375 2650 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:35:39.691729 kubelet[2650]: I0513 
12:35:39.691711 2650 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:35:39.691829 kubelet[2650]: I0513 12:35:39.691800 2650 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:35:39.692972 kubelet[2650]: I0513 12:35:39.692940 2650 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:35:39.693142 kubelet[2650]: E0513 12:35:39.693069 2650 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 12:35:39.760824 kubelet[2650]: I0513 12:35:39.760790 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:35:39.760938 kubelet[2650]: I0513 12:35:39.760842 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:35:39.760962 kubelet[2650]: I0513 12:35:39.760948 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 12:35:39.796347 kubelet[2650]: I0513 12:35:39.796316 2650 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:35:39.802318 kubelet[2650]: I0513 12:35:39.802170 2650 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 12:35:39.802318 kubelet[2650]: I0513 12:35:39.802239 2650 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 12:35:39.833246 kubelet[2650]: I0513 12:35:39.833207 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e85c26d733a99926ef278de36f8dd9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f4e85c26d733a99926ef278de36f8dd9\") " pod="kube-system/kube-apiserver-localhost" May 13 12:35:39.833246 kubelet[2650]: I0513 12:35:39.833244 2650 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:39.833359 kubelet[2650]: I0513 12:35:39.833268 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:39.833359 kubelet[2650]: I0513 12:35:39.833287 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 12:35:39.833359 kubelet[2650]: I0513 12:35:39.833323 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e85c26d733a99926ef278de36f8dd9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f4e85c26d733a99926ef278de36f8dd9\") " pod="kube-system/kube-apiserver-localhost" May 13 12:35:39.833359 kubelet[2650]: I0513 12:35:39.833340 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e85c26d733a99926ef278de36f8dd9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f4e85c26d733a99926ef278de36f8dd9\") " pod="kube-system/kube-apiserver-localhost" May 13 12:35:39.833446 kubelet[2650]: I0513 12:35:39.833373 2650 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:39.833446 kubelet[2650]: I0513 12:35:39.833391 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:39.833487 kubelet[2650]: I0513 12:35:39.833435 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:35:40.066531 kubelet[2650]: E0513 12:35:40.066407 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:40.066531 kubelet[2650]: E0513 12:35:40.066489 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:40.066770 kubelet[2650]: E0513 12:35:40.066585 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:40.625384 kubelet[2650]: I0513 12:35:40.625302 2650 apiserver.go:52] "Watching apiserver" May 13 12:35:40.632317 kubelet[2650]: 
I0513 12:35:40.632285 2650 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:35:40.674861 kubelet[2650]: I0513 12:35:40.674816 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:35:40.675305 kubelet[2650]: E0513 12:35:40.675227 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:40.675305 kubelet[2650]: I0513 12:35:40.675240 2650 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:35:40.681085 kubelet[2650]: E0513 12:35:40.681002 2650 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 12:35:40.681176 kubelet[2650]: E0513 12:35:40.681126 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:40.682411 kubelet[2650]: E0513 12:35:40.682381 2650 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 12:35:40.682597 kubelet[2650]: E0513 12:35:40.682531 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:40.698215 kubelet[2650]: I0513 12:35:40.698141 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6981116429999998 podStartE2EDuration="1.698111643s" podCreationTimestamp="2025-05-13 12:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-13 12:35:40.691675424 +0000 UTC m=+1.126253436" watchObservedRunningTime="2025-05-13 12:35:40.698111643 +0000 UTC m=+1.132689655" May 13 12:35:40.698376 kubelet[2650]: I0513 12:35:40.698260 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.698254822 podStartE2EDuration="1.698254822s" podCreationTimestamp="2025-05-13 12:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:35:40.698223098 +0000 UTC m=+1.132801110" watchObservedRunningTime="2025-05-13 12:35:40.698254822 +0000 UTC m=+1.132832834" May 13 12:35:40.712217 kubelet[2650]: I0513 12:35:40.712152 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.712075908 podStartE2EDuration="1.712075908s" podCreationTimestamp="2025-05-13 12:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:35:40.705376133 +0000 UTC m=+1.139954145" watchObservedRunningTime="2025-05-13 12:35:40.712075908 +0000 UTC m=+1.146653920" May 13 12:35:41.677213 kubelet[2650]: E0513 12:35:41.677053 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:41.677958 kubelet[2650]: E0513 12:35:41.677416 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:42.678789 kubelet[2650]: E0513 12:35:42.678664 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 13 12:35:44.020124 kubelet[2650]: I0513 12:35:44.020096 2650 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 12:35:44.020433 containerd[1521]: time="2025-05-13T12:35:44.020373036Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 12:35:44.020620 kubelet[2650]: I0513 12:35:44.020547 2650 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 12:35:44.561132 sudo[1727]: pam_unix(sudo:session): session closed for user root May 13 12:35:44.562455 sshd[1726]: Connection closed by 10.0.0.1 port 40288 May 13 12:35:44.562955 sshd-session[1724]: pam_unix(sshd:session): session closed for user core May 13 12:35:44.568525 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:40288.service: Deactivated successfully. May 13 12:35:44.572512 systemd[1]: session-7.scope: Deactivated successfully. May 13 12:35:44.572821 systemd[1]: session-7.scope: Consumed 6.802s CPU time, 231.2M memory peak. May 13 12:35:44.573856 systemd-logind[1502]: Session 7 logged out. Waiting for processes to exit. May 13 12:35:44.575128 systemd-logind[1502]: Removed session 7. May 13 12:35:45.048195 kubelet[2650]: E0513 12:35:45.048160 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:45.085154 systemd[1]: Created slice kubepods-besteffort-pod74c90907_3c6d_4c48_a832_cfb6deb967d1.slice - libcontainer container kubepods-besteffort-pod74c90907_3c6d_4c48_a832_cfb6deb967d1.slice. 
May 13 12:35:45.168413 kubelet[2650]: I0513 12:35:45.168371 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74c90907-3c6d-4c48-a832-cfb6deb967d1-xtables-lock\") pod \"kube-proxy-rk6lq\" (UID: \"74c90907-3c6d-4c48-a832-cfb6deb967d1\") " pod="kube-system/kube-proxy-rk6lq" May 13 12:35:45.168413 kubelet[2650]: I0513 12:35:45.168406 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74c90907-3c6d-4c48-a832-cfb6deb967d1-lib-modules\") pod \"kube-proxy-rk6lq\" (UID: \"74c90907-3c6d-4c48-a832-cfb6deb967d1\") " pod="kube-system/kube-proxy-rk6lq" May 13 12:35:45.168615 kubelet[2650]: I0513 12:35:45.168424 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6p6\" (UniqueName: \"kubernetes.io/projected/74c90907-3c6d-4c48-a832-cfb6deb967d1-kube-api-access-br6p6\") pod \"kube-proxy-rk6lq\" (UID: \"74c90907-3c6d-4c48-a832-cfb6deb967d1\") " pod="kube-system/kube-proxy-rk6lq" May 13 12:35:45.168615 kubelet[2650]: I0513 12:35:45.168444 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/74c90907-3c6d-4c48-a832-cfb6deb967d1-kube-proxy\") pod \"kube-proxy-rk6lq\" (UID: \"74c90907-3c6d-4c48-a832-cfb6deb967d1\") " pod="kube-system/kube-proxy-rk6lq" May 13 12:35:45.188225 systemd[1]: Created slice kubepods-besteffort-pod83af652e_aee3_4d5b_bf89_af79969cd17b.slice - libcontainer container kubepods-besteffort-pod83af652e_aee3_4d5b_bf89_af79969cd17b.slice. 
May 13 12:35:45.268795 kubelet[2650]: I0513 12:35:45.268756 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83af652e-aee3-4d5b-bf89-af79969cd17b-var-lib-calico\") pod \"tigera-operator-789496d6f5-564kp\" (UID: \"83af652e-aee3-4d5b-bf89-af79969cd17b\") " pod="tigera-operator/tigera-operator-789496d6f5-564kp" May 13 12:35:45.269084 kubelet[2650]: I0513 12:35:45.269045 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbj4v\" (UniqueName: \"kubernetes.io/projected/83af652e-aee3-4d5b-bf89-af79969cd17b-kube-api-access-cbj4v\") pod \"tigera-operator-789496d6f5-564kp\" (UID: \"83af652e-aee3-4d5b-bf89-af79969cd17b\") " pod="tigera-operator/tigera-operator-789496d6f5-564kp" May 13 12:35:45.394225 kubelet[2650]: E0513 12:35:45.394175 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:45.394675 containerd[1521]: time="2025-05-13T12:35:45.394622293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rk6lq,Uid:74c90907-3c6d-4c48-a832-cfb6deb967d1,Namespace:kube-system,Attempt:0,}" May 13 12:35:45.412728 containerd[1521]: time="2025-05-13T12:35:45.412693231Z" level=info msg="connecting to shim 8c1ff6ebc97b80b327e3081c6acf6c0a47fda21d2c4d1d7d4579d9340068b066" address="unix:///run/containerd/s/6521f88007c4c36e9786039982a8fcb7d8b8874a26d4dbef55bc15cf54e7ed2f" namespace=k8s.io protocol=ttrpc version=3 May 13 12:35:45.435075 systemd[1]: Started cri-containerd-8c1ff6ebc97b80b327e3081c6acf6c0a47fda21d2c4d1d7d4579d9340068b066.scope - libcontainer container 8c1ff6ebc97b80b327e3081c6acf6c0a47fda21d2c4d1d7d4579d9340068b066. 
May 13 12:35:45.453306 containerd[1521]: time="2025-05-13T12:35:45.453258116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rk6lq,Uid:74c90907-3c6d-4c48-a832-cfb6deb967d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c1ff6ebc97b80b327e3081c6acf6c0a47fda21d2c4d1d7d4579d9340068b066\"" May 13 12:35:45.454762 kubelet[2650]: E0513 12:35:45.454590 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:45.456576 containerd[1521]: time="2025-05-13T12:35:45.456522409Z" level=info msg="CreateContainer within sandbox \"8c1ff6ebc97b80b327e3081c6acf6c0a47fda21d2c4d1d7d4579d9340068b066\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 12:35:45.476470 containerd[1521]: time="2025-05-13T12:35:45.476434388Z" level=info msg="Container 7a2c782fe021ed1191fcab22719a0ab58324fd6672b1c973737fb39428334261: CDI devices from CRI Config.CDIDevices: []" May 13 12:35:45.485639 containerd[1521]: time="2025-05-13T12:35:45.485541981Z" level=info msg="CreateContainer within sandbox \"8c1ff6ebc97b80b327e3081c6acf6c0a47fda21d2c4d1d7d4579d9340068b066\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7a2c782fe021ed1191fcab22719a0ab58324fd6672b1c973737fb39428334261\"" May 13 12:35:45.486325 containerd[1521]: time="2025-05-13T12:35:45.486293751Z" level=info msg="StartContainer for \"7a2c782fe021ed1191fcab22719a0ab58324fd6672b1c973737fb39428334261\"" May 13 12:35:45.488445 containerd[1521]: time="2025-05-13T12:35:45.488409528Z" level=info msg="connecting to shim 7a2c782fe021ed1191fcab22719a0ab58324fd6672b1c973737fb39428334261" address="unix:///run/containerd/s/6521f88007c4c36e9786039982a8fcb7d8b8874a26d4dbef55bc15cf54e7ed2f" protocol=ttrpc version=3 May 13 12:35:45.491211 containerd[1521]: time="2025-05-13T12:35:45.491180269Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-789496d6f5-564kp,Uid:83af652e-aee3-4d5b-bf89-af79969cd17b,Namespace:tigera-operator,Attempt:0,}" May 13 12:35:45.506698 containerd[1521]: time="2025-05-13T12:35:45.506656398Z" level=info msg="connecting to shim bac00b698582958632b3f4f5c5a8eaefce0914f911a45b78f601c85f779dd495" address="unix:///run/containerd/s/65e5380f604d40630b3e862dd161e75dfbfa8359f78000967b790d4aa8d4c422" namespace=k8s.io protocol=ttrpc version=3 May 13 12:35:45.510257 systemd[1]: Started cri-containerd-7a2c782fe021ed1191fcab22719a0ab58324fd6672b1c973737fb39428334261.scope - libcontainer container 7a2c782fe021ed1191fcab22719a0ab58324fd6672b1c973737fb39428334261. May 13 12:35:45.533106 systemd[1]: Started cri-containerd-bac00b698582958632b3f4f5c5a8eaefce0914f911a45b78f601c85f779dd495.scope - libcontainer container bac00b698582958632b3f4f5c5a8eaefce0914f911a45b78f601c85f779dd495. May 13 12:35:45.564294 containerd[1521]: time="2025-05-13T12:35:45.564253194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-564kp,Uid:83af652e-aee3-4d5b-bf89-af79969cd17b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bac00b698582958632b3f4f5c5a8eaefce0914f911a45b78f601c85f779dd495\"" May 13 12:35:45.566065 containerd[1521]: time="2025-05-13T12:35:45.566031430Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 12:35:45.582572 containerd[1521]: time="2025-05-13T12:35:45.582530105Z" level=info msg="StartContainer for \"7a2c782fe021ed1191fcab22719a0ab58324fd6672b1c973737fb39428334261\" returns successfully" May 13 12:35:45.685168 kubelet[2650]: E0513 12:35:45.685067 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:45.685399 kubelet[2650]: E0513 12:35:45.685380 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:46.645862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189855198.mount: Deactivated successfully. May 13 12:35:47.710365 containerd[1521]: time="2025-05-13T12:35:47.710142763Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:47.711149 containerd[1521]: time="2025-05-13T12:35:47.710911488Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 12:35:47.711756 containerd[1521]: time="2025-05-13T12:35:47.711728096Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:47.714169 containerd[1521]: time="2025-05-13T12:35:47.714130116Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:47.714974 containerd[1521]: time="2025-05-13T12:35:47.714947444Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.148879772s" May 13 12:35:47.715059 containerd[1521]: time="2025-05-13T12:35:47.715044689Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 12:35:47.722781 containerd[1521]: time="2025-05-13T12:35:47.722749898Z" level=info msg="CreateContainer within sandbox \"bac00b698582958632b3f4f5c5a8eaefce0914f911a45b78f601c85f779dd495\" for 
container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 12:35:47.731136 containerd[1521]: time="2025-05-13T12:35:47.731095785Z" level=info msg="Container 6abce37faddcdd387dd27fbea6ee4e5c3a148d6cdcba8c895752fd715f76536c: CDI devices from CRI Config.CDIDevices: []" May 13 12:35:47.733297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353129021.mount: Deactivated successfully. May 13 12:35:47.736613 containerd[1521]: time="2025-05-13T12:35:47.736579144Z" level=info msg="CreateContainer within sandbox \"bac00b698582958632b3f4f5c5a8eaefce0914f911a45b78f601c85f779dd495\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6abce37faddcdd387dd27fbea6ee4e5c3a148d6cdcba8c895752fd715f76536c\"" May 13 12:35:47.737017 containerd[1521]: time="2025-05-13T12:35:47.736957727Z" level=info msg="StartContainer for \"6abce37faddcdd387dd27fbea6ee4e5c3a148d6cdcba8c895752fd715f76536c\"" May 13 12:35:47.737701 containerd[1521]: time="2025-05-13T12:35:47.737672848Z" level=info msg="connecting to shim 6abce37faddcdd387dd27fbea6ee4e5c3a148d6cdcba8c895752fd715f76536c" address="unix:///run/containerd/s/65e5380f604d40630b3e862dd161e75dfbfa8359f78000967b790d4aa8d4c422" protocol=ttrpc version=3 May 13 12:35:47.783120 systemd[1]: Started cri-containerd-6abce37faddcdd387dd27fbea6ee4e5c3a148d6cdcba8c895752fd715f76536c.scope - libcontainer container 6abce37faddcdd387dd27fbea6ee4e5c3a148d6cdcba8c895752fd715f76536c. 
May 13 12:35:47.804947 containerd[1521]: time="2025-05-13T12:35:47.804916648Z" level=info msg="StartContainer for \"6abce37faddcdd387dd27fbea6ee4e5c3a148d6cdcba8c895752fd715f76536c\" returns successfully" May 13 12:35:48.213576 kubelet[2650]: E0513 12:35:48.213501 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:48.240970 kubelet[2650]: I0513 12:35:48.239603 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rk6lq" podStartSLOduration=3.239588278 podStartE2EDuration="3.239588278s" podCreationTimestamp="2025-05-13 12:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:35:45.701568467 +0000 UTC m=+6.136146479" watchObservedRunningTime="2025-05-13 12:35:48.239588278 +0000 UTC m=+8.674166290" May 13 12:35:48.692467 kubelet[2650]: E0513 12:35:48.692436 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:48.701303 kubelet[2650]: I0513 12:35:48.701248 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-564kp" podStartSLOduration=1.547619127 podStartE2EDuration="3.70123298s" podCreationTimestamp="2025-05-13 12:35:45 +0000 UTC" firstStartedPulling="2025-05-13 12:35:45.5652649 +0000 UTC m=+5.999842912" lastFinishedPulling="2025-05-13 12:35:47.718878753 +0000 UTC m=+8.153456765" observedRunningTime="2025-05-13 12:35:48.701120294 +0000 UTC m=+9.135698306" watchObservedRunningTime="2025-05-13 12:35:48.70123298 +0000 UTC m=+9.135810992" May 13 12:35:49.229348 kubelet[2650]: E0513 12:35:49.229316 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:49.693481 kubelet[2650]: E0513 12:35:49.693438 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:49.693859 kubelet[2650]: E0513 12:35:49.693700 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:50.694582 kubelet[2650]: E0513 12:35:50.694552 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:52.840717 systemd[1]: Created slice kubepods-besteffort-pod23d2644b_c844_424d_af0c_6fd757e8a17d.slice - libcontainer container kubepods-besteffort-pod23d2644b_c844_424d_af0c_6fd757e8a17d.slice. 
May 13 12:35:52.923872 kubelet[2650]: I0513 12:35:52.922869 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23d2644b-c844-424d-af0c-6fd757e8a17d-tigera-ca-bundle\") pod \"calico-typha-75bb5cb966-cndrx\" (UID: \"23d2644b-c844-424d-af0c-6fd757e8a17d\") " pod="calico-system/calico-typha-75bb5cb966-cndrx" May 13 12:35:52.924214 kubelet[2650]: I0513 12:35:52.923914 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/23d2644b-c844-424d-af0c-6fd757e8a17d-typha-certs\") pod \"calico-typha-75bb5cb966-cndrx\" (UID: \"23d2644b-c844-424d-af0c-6fd757e8a17d\") " pod="calico-system/calico-typha-75bb5cb966-cndrx" May 13 12:35:52.924214 kubelet[2650]: I0513 12:35:52.923968 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8ds\" (UniqueName: \"kubernetes.io/projected/23d2644b-c844-424d-af0c-6fd757e8a17d-kube-api-access-4t8ds\") pod \"calico-typha-75bb5cb966-cndrx\" (UID: \"23d2644b-c844-424d-af0c-6fd757e8a17d\") " pod="calico-system/calico-typha-75bb5cb966-cndrx" May 13 12:35:53.022309 systemd[1]: Created slice kubepods-besteffort-podca25cf27_8015_46cf_93df_d8a354a37e87.slice - libcontainer container kubepods-besteffort-podca25cf27_8015_46cf_93df_d8a354a37e87.slice. 
May 13 12:35:53.125078 kubelet[2650]: I0513 12:35:53.125022 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca25cf27-8015-46cf-93df-d8a354a37e87-tigera-ca-bundle\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125219 kubelet[2650]: I0513 12:35:53.125099 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ca25cf27-8015-46cf-93df-d8a354a37e87-node-certs\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125219 kubelet[2650]: I0513 12:35:53.125173 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-policysync\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125219 kubelet[2650]: I0513 12:35:53.125195 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-var-lib-calico\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125219 kubelet[2650]: I0513 12:35:53.125212 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-flexvol-driver-host\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125445 kubelet[2650]: I0513 12:35:53.125256 2650 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wrr\" (UniqueName: \"kubernetes.io/projected/ca25cf27-8015-46cf-93df-d8a354a37e87-kube-api-access-57wrr\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125445 kubelet[2650]: I0513 12:35:53.125275 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-cni-bin-dir\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125445 kubelet[2650]: I0513 12:35:53.125294 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-cni-net-dir\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125445 kubelet[2650]: I0513 12:35:53.125311 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-cni-log-dir\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125445 kubelet[2650]: I0513 12:35:53.125349 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-xtables-lock\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125589 kubelet[2650]: I0513 12:35:53.125368 2650 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-var-run-calico\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.125589 kubelet[2650]: I0513 12:35:53.125388 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca25cf27-8015-46cf-93df-d8a354a37e87-lib-modules\") pod \"calico-node-7db44\" (UID: \"ca25cf27-8015-46cf-93df-d8a354a37e87\") " pod="calico-system/calico-node-7db44" May 13 12:35:53.144341 kubelet[2650]: E0513 12:35:53.144319 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:53.145333 containerd[1521]: time="2025-05-13T12:35:53.144992958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75bb5cb966-cndrx,Uid:23d2644b-c844-424d-af0c-6fd757e8a17d,Namespace:calico-system,Attempt:0,}" May 13 12:35:53.178163 containerd[1521]: time="2025-05-13T12:35:53.178079514Z" level=info msg="connecting to shim 6f4d876266614c6968c13d962f31e924c6ed0fc271d8fa7da1d3f11c58fdd0ec" address="unix:///run/containerd/s/288a3b4a5be0804d9c0239b4e084b9f0eb3c9a4f343e862b11a04b8ac9ad0def" namespace=k8s.io protocol=ttrpc version=3 May 13 12:35:53.203448 kubelet[2650]: E0513 12:35:53.203282 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kdvf" podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce" May 13 12:35:53.225111 systemd[1]: Started cri-containerd-6f4d876266614c6968c13d962f31e924c6ed0fc271d8fa7da1d3f11c58fdd0ec.scope - 
libcontainer container 6f4d876266614c6968c13d962f31e924c6ed0fc271d8fa7da1d3f11c58fdd0ec. May 13 12:35:53.226464 kubelet[2650]: I0513 12:35:53.226418 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d42jp\" (UniqueName: \"kubernetes.io/projected/a13363c5-db73-4c15-bc44-8be9849ef5ce-kube-api-access-d42jp\") pod \"csi-node-driver-7kdvf\" (UID: \"a13363c5-db73-4c15-bc44-8be9849ef5ce\") " pod="calico-system/csi-node-driver-7kdvf" May 13 12:35:53.226464 kubelet[2650]: I0513 12:35:53.226462 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a13363c5-db73-4c15-bc44-8be9849ef5ce-kubelet-dir\") pod \"csi-node-driver-7kdvf\" (UID: \"a13363c5-db73-4c15-bc44-8be9849ef5ce\") " pod="calico-system/csi-node-driver-7kdvf" May 13 12:35:53.226621 kubelet[2650]: I0513 12:35:53.226479 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a13363c5-db73-4c15-bc44-8be9849ef5ce-socket-dir\") pod \"csi-node-driver-7kdvf\" (UID: \"a13363c5-db73-4c15-bc44-8be9849ef5ce\") " pod="calico-system/csi-node-driver-7kdvf" May 13 12:35:53.226621 kubelet[2650]: I0513 12:35:53.226525 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a13363c5-db73-4c15-bc44-8be9849ef5ce-varrun\") pod \"csi-node-driver-7kdvf\" (UID: \"a13363c5-db73-4c15-bc44-8be9849ef5ce\") " pod="calico-system/csi-node-driver-7kdvf" May 13 12:35:53.226621 kubelet[2650]: I0513 12:35:53.226540 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a13363c5-db73-4c15-bc44-8be9849ef5ce-registration-dir\") pod \"csi-node-driver-7kdvf\" (UID: 
\"a13363c5-db73-4c15-bc44-8be9849ef5ce\") " pod="calico-system/csi-node-driver-7kdvf" May 13 12:35:53.231518 kubelet[2650]: E0513 12:35:53.231472 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.231518 kubelet[2650]: W0513 12:35:53.231493 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.231518 kubelet[2650]: E0513 12:35:53.231519 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.231827 kubelet[2650]: E0513 12:35:53.231799 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.231827 kubelet[2650]: W0513 12:35:53.231818 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.231827 kubelet[2650]: E0513 12:35:53.231828 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.239634 kubelet[2650]: E0513 12:35:53.238914 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.239634 kubelet[2650]: W0513 12:35:53.238934 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.239634 kubelet[2650]: E0513 12:35:53.238948 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.261106 containerd[1521]: time="2025-05-13T12:35:53.261067574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75bb5cb966-cndrx,Uid:23d2644b-c844-424d-af0c-6fd757e8a17d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f4d876266614c6968c13d962f31e924c6ed0fc271d8fa7da1d3f11c58fdd0ec\"" May 13 12:35:53.261836 kubelet[2650]: E0513 12:35:53.261816 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:53.265619 containerd[1521]: time="2025-05-13T12:35:53.265589285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 12:35:53.326892 kubelet[2650]: E0513 12:35:53.326830 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:53.327251 kubelet[2650]: E0513 12:35:53.327213 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.327251 kubelet[2650]: W0513 12:35:53.327230 2650 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.327399 kubelet[2650]: E0513 12:35:53.327335 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.327450 containerd[1521]: time="2025-05-13T12:35:53.327392292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7db44,Uid:ca25cf27-8015-46cf-93df-d8a354a37e87,Namespace:calico-system,Attempt:0,}" May 13 12:35:53.327815 kubelet[2650]: E0513 12:35:53.327785 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.327815 kubelet[2650]: W0513 12:35:53.327797 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.327992 kubelet[2650]: E0513 12:35:53.327932 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.328321 kubelet[2650]: E0513 12:35:53.328216 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.328321 kubelet[2650]: W0513 12:35:53.328230 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.328515 kubelet[2650]: E0513 12:35:53.328247 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.328555 kubelet[2650]: E0513 12:35:53.328515 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.328555 kubelet[2650]: W0513 12:35:53.328529 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.328601 kubelet[2650]: E0513 12:35:53.328568 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.328802 kubelet[2650]: E0513 12:35:53.328747 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.328802 kubelet[2650]: W0513 12:35:53.328757 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.328802 kubelet[2650]: E0513 12:35:53.328773 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.329069 kubelet[2650]: E0513 12:35:53.329050 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.329069 kubelet[2650]: W0513 12:35:53.329065 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.329154 kubelet[2650]: E0513 12:35:53.329081 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.329870 kubelet[2650]: E0513 12:35:53.329815 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.329870 kubelet[2650]: W0513 12:35:53.329835 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.330507 kubelet[2650]: E0513 12:35:53.330015 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.330507 kubelet[2650]: E0513 12:35:53.330455 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.330507 kubelet[2650]: W0513 12:35:53.330467 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.330507 kubelet[2650]: E0513 12:35:53.330507 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.330762 kubelet[2650]: E0513 12:35:53.330689 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.330762 kubelet[2650]: W0513 12:35:53.330703 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.330762 kubelet[2650]: E0513 12:35:53.330734 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.331017 kubelet[2650]: E0513 12:35:53.330861 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.331017 kubelet[2650]: W0513 12:35:53.330870 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.331017 kubelet[2650]: E0513 12:35:53.330892 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.331359 kubelet[2650]: E0513 12:35:53.331059 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.331359 kubelet[2650]: W0513 12:35:53.331070 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.331359 kubelet[2650]: E0513 12:35:53.331154 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.331359 kubelet[2650]: E0513 12:35:53.331222 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.331359 kubelet[2650]: W0513 12:35:53.331245 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.331359 kubelet[2650]: E0513 12:35:53.331262 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.331967 kubelet[2650]: E0513 12:35:53.331928 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.331967 kubelet[2650]: W0513 12:35:53.331941 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.331967 kubelet[2650]: E0513 12:35:53.331956 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.332519 kubelet[2650]: E0513 12:35:53.332403 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.332519 kubelet[2650]: W0513 12:35:53.332420 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.332519 kubelet[2650]: E0513 12:35:53.332441 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.332882 kubelet[2650]: E0513 12:35:53.332820 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.332882 kubelet[2650]: W0513 12:35:53.332834 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.332882 kubelet[2650]: E0513 12:35:53.332876 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.333311 kubelet[2650]: E0513 12:35:53.333298 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.333507 kubelet[2650]: W0513 12:35:53.333409 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.333507 kubelet[2650]: E0513 12:35:53.333449 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.334325 kubelet[2650]: E0513 12:35:53.333669 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.334325 kubelet[2650]: W0513 12:35:53.333682 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.334325 kubelet[2650]: E0513 12:35:53.333725 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.335116 kubelet[2650]: E0513 12:35:53.334943 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.335116 kubelet[2650]: W0513 12:35:53.334960 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.335116 kubelet[2650]: E0513 12:35:53.334988 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.335556 kubelet[2650]: E0513 12:35:53.335446 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.335556 kubelet[2650]: W0513 12:35:53.335461 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.335556 kubelet[2650]: E0513 12:35:53.335508 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.335829 kubelet[2650]: E0513 12:35:53.335815 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.335930 kubelet[2650]: W0513 12:35:53.335893 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.336025 kubelet[2650]: E0513 12:35:53.336001 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.336301 kubelet[2650]: E0513 12:35:53.336191 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.336301 kubelet[2650]: W0513 12:35:53.336204 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.336301 kubelet[2650]: E0513 12:35:53.336254 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.336614 kubelet[2650]: E0513 12:35:53.336600 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.336840 kubelet[2650]: W0513 12:35:53.336728 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.336840 kubelet[2650]: E0513 12:35:53.336755 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.337166 kubelet[2650]: E0513 12:35:53.337151 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.337233 kubelet[2650]: W0513 12:35:53.337221 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.337294 kubelet[2650]: E0513 12:35:53.337282 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.337527 kubelet[2650]: E0513 12:35:53.337511 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.337580 kubelet[2650]: W0513 12:35:53.337526 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.337580 kubelet[2650]: E0513 12:35:53.337538 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.337996 kubelet[2650]: E0513 12:35:53.337887 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.337996 kubelet[2650]: W0513 12:35:53.337912 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.337996 kubelet[2650]: E0513 12:35:53.337923 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:53.343869 kubelet[2650]: E0513 12:35:53.343846 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:53.343869 kubelet[2650]: W0513 12:35:53.343867 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:53.344061 kubelet[2650]: E0513 12:35:53.343887 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:53.352628 containerd[1521]: time="2025-05-13T12:35:53.352531553Z" level=info msg="connecting to shim 7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb" address="unix:///run/containerd/s/edf2aff33a041cc4885815bd91ed4e46ecfcba70a1d2f342e0e41dd610b5576a" namespace=k8s.io protocol=ttrpc version=3 May 13 12:35:53.380134 systemd[1]: Started cri-containerd-7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb.scope - libcontainer container 7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb. 
May 13 12:35:53.427098 containerd[1521]: time="2025-05-13T12:35:53.427055536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7db44,Uid:ca25cf27-8015-46cf-93df-d8a354a37e87,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb\"" May 13 12:35:53.427839 kubelet[2650]: E0513 12:35:53.427820 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:54.660196 kubelet[2650]: E0513 12:35:54.660129 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kdvf" podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce" May 13 12:35:56.457010 update_engine[1508]: I20250513 12:35:56.456925 1508 update_attempter.cc:509] Updating boot flags... 
May 13 12:35:56.660542 kubelet[2650]: E0513 12:35:56.660503 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kdvf" podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce" May 13 12:35:57.423687 containerd[1521]: time="2025-05-13T12:35:57.423647838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:57.424672 containerd[1521]: time="2025-05-13T12:35:57.424649793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 12:35:57.425490 containerd[1521]: time="2025-05-13T12:35:57.425464781Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:57.427642 containerd[1521]: time="2025-05-13T12:35:57.427609775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:57.428534 containerd[1521]: time="2025-05-13T12:35:57.428494285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 4.162871959s" May 13 12:35:57.428534 containerd[1521]: time="2025-05-13T12:35:57.428528486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference 
\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 12:35:57.431205 containerd[1521]: time="2025-05-13T12:35:57.431174098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 12:35:57.446327 containerd[1521]: time="2025-05-13T12:35:57.445991528Z" level=info msg="CreateContainer within sandbox \"6f4d876266614c6968c13d962f31e924c6ed0fc271d8fa7da1d3f11c58fdd0ec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 12:35:57.453436 containerd[1521]: time="2025-05-13T12:35:57.453091972Z" level=info msg="Container 6662df8ada31c1bfedc71fdc1fe561331257ed76b9218ebf7d7988fac500eff6: CDI devices from CRI Config.CDIDevices: []" May 13 12:35:57.455791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192817863.mount: Deactivated successfully. May 13 12:35:57.460070 containerd[1521]: time="2025-05-13T12:35:57.460036691Z" level=info msg="CreateContainer within sandbox \"6f4d876266614c6968c13d962f31e924c6ed0fc271d8fa7da1d3f11c58fdd0ec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6662df8ada31c1bfedc71fdc1fe561331257ed76b9218ebf7d7988fac500eff6\"" May 13 12:35:57.460557 containerd[1521]: time="2025-05-13T12:35:57.460520228Z" level=info msg="StartContainer for \"6662df8ada31c1bfedc71fdc1fe561331257ed76b9218ebf7d7988fac500eff6\"" May 13 12:35:57.461758 containerd[1521]: time="2025-05-13T12:35:57.461725510Z" level=info msg="connecting to shim 6662df8ada31c1bfedc71fdc1fe561331257ed76b9218ebf7d7988fac500eff6" address="unix:///run/containerd/s/288a3b4a5be0804d9c0239b4e084b9f0eb3c9a4f343e862b11a04b8ac9ad0def" protocol=ttrpc version=3 May 13 12:35:57.481072 systemd[1]: Started cri-containerd-6662df8ada31c1bfedc71fdc1fe561331257ed76b9218ebf7d7988fac500eff6.scope - libcontainer container 6662df8ada31c1bfedc71fdc1fe561331257ed76b9218ebf7d7988fac500eff6. 
May 13 12:35:57.516240 containerd[1521]: time="2025-05-13T12:35:57.516201665Z" level=info msg="StartContainer for \"6662df8ada31c1bfedc71fdc1fe561331257ed76b9218ebf7d7988fac500eff6\" returns successfully" May 13 12:35:57.714498 kubelet[2650]: E0513 12:35:57.714408 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:57.726738 kubelet[2650]: I0513 12:35:57.726641 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75bb5cb966-cndrx" podStartSLOduration=1.559893755 podStartE2EDuration="5.726626471s" podCreationTimestamp="2025-05-13 12:35:52 +0000 UTC" firstStartedPulling="2025-05-13 12:35:53.262463793 +0000 UTC m=+13.697041805" lastFinishedPulling="2025-05-13 12:35:57.429196509 +0000 UTC m=+17.863774521" observedRunningTime="2025-05-13 12:35:57.726393783 +0000 UTC m=+18.160971835" watchObservedRunningTime="2025-05-13 12:35:57.726626471 +0000 UTC m=+18.161204483" May 13 12:35:57.744670 kubelet[2650]: E0513 12:35:57.744621 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.744670 kubelet[2650]: W0513 12:35:57.744645 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.744670 kubelet[2650]: E0513 12:35:57.744663 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.744856 kubelet[2650]: E0513 12:35:57.744827 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.748240 kubelet[2650]: W0513 12:35:57.744835 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.748240 kubelet[2650]: E0513 12:35:57.748235 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.748419 kubelet[2650]: E0513 12:35:57.748395 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.748419 kubelet[2650]: W0513 12:35:57.748407 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.748419 kubelet[2650]: E0513 12:35:57.748416 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.748549 kubelet[2650]: E0513 12:35:57.748532 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.748549 kubelet[2650]: W0513 12:35:57.748543 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.748599 kubelet[2650]: E0513 12:35:57.748550 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.748693 kubelet[2650]: E0513 12:35:57.748671 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.748693 kubelet[2650]: W0513 12:35:57.748682 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.748693 kubelet[2650]: E0513 12:35:57.748689 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.748817 kubelet[2650]: E0513 12:35:57.748798 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.748817 kubelet[2650]: W0513 12:35:57.748810 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.748865 kubelet[2650]: E0513 12:35:57.748818 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.748960 kubelet[2650]: E0513 12:35:57.748948 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.748960 kubelet[2650]: W0513 12:35:57.748958 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.749013 kubelet[2650]: E0513 12:35:57.748967 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.749099 kubelet[2650]: E0513 12:35:57.749077 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.749099 kubelet[2650]: W0513 12:35:57.749087 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.749099 kubelet[2650]: E0513 12:35:57.749096 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.749237 kubelet[2650]: E0513 12:35:57.749225 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.749237 kubelet[2650]: W0513 12:35:57.749236 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.749278 kubelet[2650]: E0513 12:35:57.749243 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.749359 kubelet[2650]: E0513 12:35:57.749349 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.749386 kubelet[2650]: W0513 12:35:57.749358 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.749386 kubelet[2650]: E0513 12:35:57.749367 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.749481 kubelet[2650]: E0513 12:35:57.749472 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.749503 kubelet[2650]: W0513 12:35:57.749481 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.749503 kubelet[2650]: E0513 12:35:57.749488 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.749604 kubelet[2650]: E0513 12:35:57.749593 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.749628 kubelet[2650]: W0513 12:35:57.749603 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.749628 kubelet[2650]: E0513 12:35:57.749611 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.749740 kubelet[2650]: E0513 12:35:57.749729 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.749764 kubelet[2650]: W0513 12:35:57.749739 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.749764 kubelet[2650]: E0513 12:35:57.749747 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.749887 kubelet[2650]: E0513 12:35:57.749877 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.749928 kubelet[2650]: W0513 12:35:57.749886 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.749928 kubelet[2650]: E0513 12:35:57.749893 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.750027 kubelet[2650]: E0513 12:35:57.750016 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.750027 kubelet[2650]: W0513 12:35:57.750025 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.750067 kubelet[2650]: E0513 12:35:57.750032 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.762484 kubelet[2650]: E0513 12:35:57.762457 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.762484 kubelet[2650]: W0513 12:35:57.762473 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.762484 kubelet[2650]: E0513 12:35:57.762486 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.762679 kubelet[2650]: E0513 12:35:57.762652 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.762679 kubelet[2650]: W0513 12:35:57.762666 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.762729 kubelet[2650]: E0513 12:35:57.762680 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.762855 kubelet[2650]: E0513 12:35:57.762836 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.762855 kubelet[2650]: W0513 12:35:57.762848 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.762916 kubelet[2650]: E0513 12:35:57.762861 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.763065 kubelet[2650]: E0513 12:35:57.763051 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.763065 kubelet[2650]: W0513 12:35:57.763064 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.763129 kubelet[2650]: E0513 12:35:57.763074 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.763241 kubelet[2650]: E0513 12:35:57.763227 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.763241 kubelet[2650]: W0513 12:35:57.763239 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.763292 kubelet[2650]: E0513 12:35:57.763247 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.763377 kubelet[2650]: E0513 12:35:57.763367 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.763402 kubelet[2650]: W0513 12:35:57.763377 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.763402 kubelet[2650]: E0513 12:35:57.763384 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.763688 kubelet[2650]: E0513 12:35:57.763540 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.763688 kubelet[2650]: W0513 12:35:57.763557 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.763688 kubelet[2650]: E0513 12:35:57.763566 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.764055 kubelet[2650]: E0513 12:35:57.763882 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.764055 kubelet[2650]: W0513 12:35:57.763918 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.764055 kubelet[2650]: E0513 12:35:57.763932 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.764343 kubelet[2650]: E0513 12:35:57.764086 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.764343 kubelet[2650]: W0513 12:35:57.764094 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.764343 kubelet[2650]: E0513 12:35:57.764161 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.764343 kubelet[2650]: E0513 12:35:57.764256 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.764343 kubelet[2650]: W0513 12:35:57.764264 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.764343 kubelet[2650]: E0513 12:35:57.764326 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.764463 kubelet[2650]: E0513 12:35:57.764426 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.764463 kubelet[2650]: W0513 12:35:57.764434 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.764463 kubelet[2650]: E0513 12:35:57.764445 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.764733 kubelet[2650]: E0513 12:35:57.764567 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.764733 kubelet[2650]: W0513 12:35:57.764579 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.764733 kubelet[2650]: E0513 12:35:57.764595 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.764953 kubelet[2650]: E0513 12:35:57.764769 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.764953 kubelet[2650]: W0513 12:35:57.764776 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.764953 kubelet[2650]: E0513 12:35:57.764784 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.765248 kubelet[2650]: E0513 12:35:57.765135 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.765248 kubelet[2650]: W0513 12:35:57.765152 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.765248 kubelet[2650]: E0513 12:35:57.765170 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.765392 kubelet[2650]: E0513 12:35:57.765381 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.765443 kubelet[2650]: W0513 12:35:57.765433 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.765510 kubelet[2650]: E0513 12:35:57.765498 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.765714 kubelet[2650]: E0513 12:35:57.765702 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.765983 kubelet[2650]: W0513 12:35:57.765764 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.765983 kubelet[2650]: E0513 12:35:57.765785 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:57.766052 kubelet[2650]: E0513 12:35:57.765984 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.766052 kubelet[2650]: W0513 12:35:57.765997 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.766052 kubelet[2650]: E0513 12:35:57.766013 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:57.766156 kubelet[2650]: E0513 12:35:57.766140 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:57.766156 kubelet[2650]: W0513 12:35:57.766151 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:57.766206 kubelet[2650]: E0513 12:35:57.766159 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.661049 kubelet[2650]: E0513 12:35:58.660999 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kdvf" podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce" May 13 12:35:58.715703 kubelet[2650]: I0513 12:35:58.715662 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:35:58.716058 kubelet[2650]: E0513 12:35:58.716044 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:58.757371 kubelet[2650]: E0513 12:35:58.757333 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.757371 kubelet[2650]: W0513 12:35:58.757358 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.757371 kubelet[2650]: E0513 12:35:58.757377 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.757526 kubelet[2650]: E0513 12:35:58.757517 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.757526 kubelet[2650]: W0513 12:35:58.757526 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.757575 kubelet[2650]: E0513 12:35:58.757535 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.757685 kubelet[2650]: E0513 12:35:58.757661 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.757685 kubelet[2650]: W0513 12:35:58.757671 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.757685 kubelet[2650]: E0513 12:35:58.757679 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.757819 kubelet[2650]: E0513 12:35:58.757799 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.757819 kubelet[2650]: W0513 12:35:58.757810 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.757819 kubelet[2650]: E0513 12:35:58.757817 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.757983 kubelet[2650]: E0513 12:35:58.757962 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.757983 kubelet[2650]: W0513 12:35:58.757974 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.757983 kubelet[2650]: E0513 12:35:58.757982 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.758153 kubelet[2650]: E0513 12:35:58.758129 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.758153 kubelet[2650]: W0513 12:35:58.758145 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.758202 kubelet[2650]: E0513 12:35:58.758154 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.758294 kubelet[2650]: E0513 12:35:58.758282 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.758294 kubelet[2650]: W0513 12:35:58.758292 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.758337 kubelet[2650]: E0513 12:35:58.758299 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.758423 kubelet[2650]: E0513 12:35:58.758414 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.758448 kubelet[2650]: W0513 12:35:58.758423 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.758448 kubelet[2650]: E0513 12:35:58.758430 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.758612 kubelet[2650]: E0513 12:35:58.758600 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.758612 kubelet[2650]: W0513 12:35:58.758611 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.758667 kubelet[2650]: E0513 12:35:58.758618 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.758746 kubelet[2650]: E0513 12:35:58.758734 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.758746 kubelet[2650]: W0513 12:35:58.758744 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.758796 kubelet[2650]: E0513 12:35:58.758753 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.758868 kubelet[2650]: E0513 12:35:58.758858 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.758892 kubelet[2650]: W0513 12:35:58.758868 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.758892 kubelet[2650]: E0513 12:35:58.758875 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.759017 kubelet[2650]: E0513 12:35:58.759005 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.759017 kubelet[2650]: W0513 12:35:58.759015 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.759061 kubelet[2650]: E0513 12:35:58.759022 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.759159 kubelet[2650]: E0513 12:35:58.759147 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.759159 kubelet[2650]: W0513 12:35:58.759157 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.759205 kubelet[2650]: E0513 12:35:58.759165 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.759291 kubelet[2650]: E0513 12:35:58.759281 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.759316 kubelet[2650]: W0513 12:35:58.759291 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.759316 kubelet[2650]: E0513 12:35:58.759298 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.759416 kubelet[2650]: E0513 12:35:58.759406 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.759437 kubelet[2650]: W0513 12:35:58.759416 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.759437 kubelet[2650]: E0513 12:35:58.759422 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.770758 kubelet[2650]: E0513 12:35:58.770732 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.770758 kubelet[2650]: W0513 12:35:58.770749 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.770840 kubelet[2650]: E0513 12:35:58.770762 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.770992 kubelet[2650]: E0513 12:35:58.770972 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.770992 kubelet[2650]: W0513 12:35:58.770986 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.771059 kubelet[2650]: E0513 12:35:58.771000 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.771180 kubelet[2650]: E0513 12:35:58.771156 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.771180 kubelet[2650]: W0513 12:35:58.771171 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.771229 kubelet[2650]: E0513 12:35:58.771187 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.771332 kubelet[2650]: E0513 12:35:58.771322 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.771356 kubelet[2650]: W0513 12:35:58.771332 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.771356 kubelet[2650]: E0513 12:35:58.771344 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.771477 kubelet[2650]: E0513 12:35:58.771466 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.771477 kubelet[2650]: W0513 12:35:58.771475 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.771527 kubelet[2650]: E0513 12:35:58.771487 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.771637 kubelet[2650]: E0513 12:35:58.771627 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.771664 kubelet[2650]: W0513 12:35:58.771638 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.771664 kubelet[2650]: E0513 12:35:58.771650 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.771856 kubelet[2650]: E0513 12:35:58.771842 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.771856 kubelet[2650]: W0513 12:35:58.771855 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.771936 kubelet[2650]: E0513 12:35:58.771870 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.772100 kubelet[2650]: E0513 12:35:58.772084 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.772100 kubelet[2650]: W0513 12:35:58.772099 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.772158 kubelet[2650]: E0513 12:35:58.772113 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.772341 kubelet[2650]: E0513 12:35:58.772315 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.772341 kubelet[2650]: W0513 12:35:58.772329 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.772438 kubelet[2650]: E0513 12:35:58.772372 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.772533 kubelet[2650]: E0513 12:35:58.772520 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.772533 kubelet[2650]: W0513 12:35:58.772531 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.772589 kubelet[2650]: E0513 12:35:58.772575 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.772709 kubelet[2650]: E0513 12:35:58.772699 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.772735 kubelet[2650]: W0513 12:35:58.772712 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.772735 kubelet[2650]: E0513 12:35:58.772730 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.772918 kubelet[2650]: E0513 12:35:58.772891 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.772918 kubelet[2650]: W0513 12:35:58.772917 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.772971 kubelet[2650]: E0513 12:35:58.772938 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.773130 kubelet[2650]: E0513 12:35:58.773117 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.773130 kubelet[2650]: W0513 12:35:58.773129 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.773198 kubelet[2650]: E0513 12:35:58.773148 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.773497 kubelet[2650]: E0513 12:35:58.773400 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.773497 kubelet[2650]: W0513 12:35:58.773414 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.773497 kubelet[2650]: E0513 12:35:58.773432 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.773638 kubelet[2650]: E0513 12:35:58.773626 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.773693 kubelet[2650]: W0513 12:35:58.773682 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.773756 kubelet[2650]: E0513 12:35:58.773745 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.773905 kubelet[2650]: E0513 12:35:58.773879 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.773905 kubelet[2650]: W0513 12:35:58.773892 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.773973 kubelet[2650]: E0513 12:35:58.773942 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:58.774127 kubelet[2650]: E0513 12:35:58.774114 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.774127 kubelet[2650]: W0513 12:35:58.774126 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.774191 kubelet[2650]: E0513 12:35:58.774134 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:35:58.774395 kubelet[2650]: E0513 12:35:58.774382 2650 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:35:58.774395 kubelet[2650]: W0513 12:35:58.774394 2650 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:35:58.774455 kubelet[2650]: E0513 12:35:58.774430 2650 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:35:59.162300 containerd[1521]: time="2025-05-13T12:35:59.162255785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:59.163046 containerd[1521]: time="2025-05-13T12:35:59.162834483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 12:35:59.163648 containerd[1521]: time="2025-05-13T12:35:59.163613347Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:59.165794 containerd[1521]: time="2025-05-13T12:35:59.165758814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:35:59.166643 containerd[1521]: time="2025-05-13T12:35:59.166604880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.735316139s" May 13 12:35:59.166643 containerd[1521]: time="2025-05-13T12:35:59.166637722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 12:35:59.175858 containerd[1521]: time="2025-05-13T12:35:59.175826369Z" level=info msg="CreateContainer within sandbox \"7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 12:35:59.182053 containerd[1521]: time="2025-05-13T12:35:59.182014042Z" level=info msg="Container 24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa: CDI devices from CRI Config.CDIDevices: []" May 13 12:35:59.197098 containerd[1521]: time="2025-05-13T12:35:59.197052472Z" level=info msg="CreateContainer within sandbox \"7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa\"" May 13 12:35:59.197775 containerd[1521]: time="2025-05-13T12:35:59.197552607Z" level=info msg="StartContainer for \"24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa\"" May 13 12:35:59.199434 containerd[1521]: time="2025-05-13T12:35:59.199406185Z" level=info msg="connecting to shim 24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa" address="unix:///run/containerd/s/edf2aff33a041cc4885815bd91ed4e46ecfcba70a1d2f342e0e41dd610b5576a" protocol=ttrpc version=3 May 13 12:35:59.226066 systemd[1]: Started cri-containerd-24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa.scope - libcontainer container 24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa. May 13 12:35:59.259304 containerd[1521]: time="2025-05-13T12:35:59.258866923Z" level=info msg="StartContainer for \"24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa\" returns successfully" May 13 12:35:59.293476 systemd[1]: cri-containerd-24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa.scope: Deactivated successfully. May 13 12:35:59.293766 systemd[1]: cri-containerd-24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa.scope: Consumed 43ms CPU time, 7.8M memory peak, 6.2M written to disk. 
May 13 12:35:59.323655 containerd[1521]: time="2025-05-13T12:35:59.323616267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa\" id:\"24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa\" pid:3318 exited_at:{seconds:1747139759 nanos:310891789}" May 13 12:35:59.324407 containerd[1521]: time="2025-05-13T12:35:59.324368930Z" level=info msg="received exit event container_id:\"24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa\" id:\"24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa\" pid:3318 exited_at:{seconds:1747139759 nanos:310891789}" May 13 12:35:59.355625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24f5e465eb48499df8a2862634a4c4189db62c249f3c274bd2f8cde8da8ad4aa-rootfs.mount: Deactivated successfully. May 13 12:35:59.718795 kubelet[2650]: E0513 12:35:59.718764 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:35:59.720225 containerd[1521]: time="2025-05-13T12:35:59.719463236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 12:36:00.661471 kubelet[2650]: E0513 12:36:00.661076 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kdvf" podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce" May 13 12:36:02.660741 kubelet[2650]: E0513 12:36:02.660697 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kdvf" 
podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce" May 13 12:36:04.487150 kubelet[2650]: I0513 12:36:04.487119 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:36:04.487732 kubelet[2650]: E0513 12:36:04.487411 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:04.660687 kubelet[2650]: E0513 12:36:04.660631 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kdvf" podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce" May 13 12:36:04.727052 kubelet[2650]: E0513 12:36:04.726994 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:06.404622 containerd[1521]: time="2025-05-13T12:36:06.404574788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:06.405169 containerd[1521]: time="2025-05-13T12:36:06.405135201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 12:36:06.405746 containerd[1521]: time="2025-05-13T12:36:06.405697493Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:06.407525 containerd[1521]: time="2025-05-13T12:36:06.407473974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 
12:36:06.408404 containerd[1521]: time="2025-05-13T12:36:06.408316473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 6.688804596s" May 13 12:36:06.408404 containerd[1521]: time="2025-05-13T12:36:06.408350274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 12:36:06.410595 containerd[1521]: time="2025-05-13T12:36:06.410461282Z" level=info msg="CreateContainer within sandbox \"7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 12:36:06.422969 containerd[1521]: time="2025-05-13T12:36:06.422931567Z" level=info msg="Container eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24: CDI devices from CRI Config.CDIDevices: []" May 13 12:36:06.430166 containerd[1521]: time="2025-05-13T12:36:06.430128371Z" level=info msg="CreateContainer within sandbox \"7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24\"" May 13 12:36:06.430579 containerd[1521]: time="2025-05-13T12:36:06.430555941Z" level=info msg="StartContainer for \"eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24\"" May 13 12:36:06.432344 containerd[1521]: time="2025-05-13T12:36:06.432274981Z" level=info msg="connecting to shim eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24" address="unix:///run/containerd/s/edf2aff33a041cc4885815bd91ed4e46ecfcba70a1d2f342e0e41dd610b5576a" protocol=ttrpc version=3 May 13 12:36:06.454035 
systemd[1]: Started cri-containerd-eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24.scope - libcontainer container eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24. May 13 12:36:06.522459 containerd[1521]: time="2025-05-13T12:36:06.522377078Z" level=info msg="StartContainer for \"eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24\" returns successfully" May 13 12:36:06.661269 kubelet[2650]: E0513 12:36:06.660714 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kdvf" podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce" May 13 12:36:06.737978 kubelet[2650]: E0513 12:36:06.737864 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:07.050138 systemd[1]: cri-containerd-eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24.scope: Deactivated successfully. May 13 12:36:07.050427 systemd[1]: cri-containerd-eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24.scope: Consumed 440ms CPU time, 158.1M memory peak, 4K read from disk, 150.3M written to disk. 
May 13 12:36:07.052381 containerd[1521]: time="2025-05-13T12:36:07.052338416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24\" id:\"eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24\" pid:3381 exited_at:{seconds:1747139767 nanos:52036409}" May 13 12:36:07.065251 containerd[1521]: time="2025-05-13T12:36:07.065208058Z" level=info msg="received exit event container_id:\"eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24\" id:\"eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24\" pid:3381 exited_at:{seconds:1747139767 nanos:52036409}" May 13 12:36:07.073998 kubelet[2650]: I0513 12:36:07.073946 2650 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 12:36:07.083447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb1502fd44222035d239a4f994c151080b16ffe0ec8df7e186b8d1392e63ce24-rootfs.mount: Deactivated successfully. May 13 12:36:07.143147 systemd[1]: Created slice kubepods-burstable-podca904cd7_1e92_491d_aeb7_3f7c4a33b320.slice - libcontainer container kubepods-burstable-podca904cd7_1e92_491d_aeb7_3f7c4a33b320.slice. May 13 12:36:07.153556 systemd[1]: Created slice kubepods-besteffort-pod69116814_2f5c_4ee2_a520_4ee35d7c81a9.slice - libcontainer container kubepods-besteffort-pod69116814_2f5c_4ee2_a520_4ee35d7c81a9.slice. May 13 12:36:07.158232 systemd[1]: Created slice kubepods-besteffort-podb0c50f51_4a5b_495e_9b69_7356538618c6.slice - libcontainer container kubepods-besteffort-podb0c50f51_4a5b_495e_9b69_7356538618c6.slice. May 13 12:36:07.162059 systemd[1]: Created slice kubepods-besteffort-podcf6950d9_0ba5_43dc_9de9_e60cd82a7be3.slice - libcontainer container kubepods-besteffort-podcf6950d9_0ba5_43dc_9de9_e60cd82a7be3.slice. 
May 13 12:36:07.165630 systemd[1]: Created slice kubepods-burstable-pod22b2f084_94f7_495c_ac36_2ef6e79aa0ee.slice - libcontainer container kubepods-burstable-pod22b2f084_94f7_495c_ac36_2ef6e79aa0ee.slice.
May 13 12:36:07.233767 kubelet[2650]: I0513 12:36:07.233730 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26vh6\" (UniqueName: \"kubernetes.io/projected/b0c50f51-4a5b-495e-9b69-7356538618c6-kube-api-access-26vh6\") pod \"calico-apiserver-86c67ddb4-6s9k8\" (UID: \"b0c50f51-4a5b-495e-9b69-7356538618c6\") " pod="calico-apiserver/calico-apiserver-86c67ddb4-6s9k8"
May 13 12:36:07.234183 kubelet[2650]: I0513 12:36:07.234022 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxgf\" (UniqueName: \"kubernetes.io/projected/cf6950d9-0ba5-43dc-9de9-e60cd82a7be3-kube-api-access-xlxgf\") pod \"calico-apiserver-86c67ddb4-djgp4\" (UID: \"cf6950d9-0ba5-43dc-9de9-e60cd82a7be3\") " pod="calico-apiserver/calico-apiserver-86c67ddb4-djgp4"
May 13 12:36:07.234183 kubelet[2650]: I0513 12:36:07.234051 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22b2f084-94f7-495c-ac36-2ef6e79aa0ee-config-volume\") pod \"coredns-668d6bf9bc-x5r8q\" (UID: \"22b2f084-94f7-495c-ac36-2ef6e79aa0ee\") " pod="kube-system/coredns-668d6bf9bc-x5r8q"
May 13 12:36:07.234183 kubelet[2650]: I0513 12:36:07.234069 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8g77\" (UniqueName: \"kubernetes.io/projected/22b2f084-94f7-495c-ac36-2ef6e79aa0ee-kube-api-access-p8g77\") pod \"coredns-668d6bf9bc-x5r8q\" (UID: \"22b2f084-94f7-495c-ac36-2ef6e79aa0ee\") " pod="kube-system/coredns-668d6bf9bc-x5r8q"
May 13 12:36:07.234183 kubelet[2650]: I0513 12:36:07.234091 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkksj\" (UniqueName: \"kubernetes.io/projected/ca904cd7-1e92-491d-aeb7-3f7c4a33b320-kube-api-access-tkksj\") pod \"coredns-668d6bf9bc-z9526\" (UID: \"ca904cd7-1e92-491d-aeb7-3f7c4a33b320\") " pod="kube-system/coredns-668d6bf9bc-z9526"
May 13 12:36:07.234183 kubelet[2650]: I0513 12:36:07.234146 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69116814-2f5c-4ee2-a520-4ee35d7c81a9-tigera-ca-bundle\") pod \"calico-kube-controllers-87fb474fb-wjmnx\" (UID: \"69116814-2f5c-4ee2-a520-4ee35d7c81a9\") " pod="calico-system/calico-kube-controllers-87fb474fb-wjmnx"
May 13 12:36:07.234462 kubelet[2650]: I0513 12:36:07.234162 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf6950d9-0ba5-43dc-9de9-e60cd82a7be3-calico-apiserver-certs\") pod \"calico-apiserver-86c67ddb4-djgp4\" (UID: \"cf6950d9-0ba5-43dc-9de9-e60cd82a7be3\") " pod="calico-apiserver/calico-apiserver-86c67ddb4-djgp4"
May 13 12:36:07.234462 kubelet[2650]: I0513 12:36:07.234293 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca904cd7-1e92-491d-aeb7-3f7c4a33b320-config-volume\") pod \"coredns-668d6bf9bc-z9526\" (UID: \"ca904cd7-1e92-491d-aeb7-3f7c4a33b320\") " pod="kube-system/coredns-668d6bf9bc-z9526"
May 13 12:36:07.234462 kubelet[2650]: I0513 12:36:07.234321 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5k8g\" (UniqueName: \"kubernetes.io/projected/69116814-2f5c-4ee2-a520-4ee35d7c81a9-kube-api-access-r5k8g\") pod \"calico-kube-controllers-87fb474fb-wjmnx\" (UID: \"69116814-2f5c-4ee2-a520-4ee35d7c81a9\") " pod="calico-system/calico-kube-controllers-87fb474fb-wjmnx"
May 13 12:36:07.234649 kubelet[2650]: I0513 12:36:07.234584 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b0c50f51-4a5b-495e-9b69-7356538618c6-calico-apiserver-certs\") pod \"calico-apiserver-86c67ddb4-6s9k8\" (UID: \"b0c50f51-4a5b-495e-9b69-7356538618c6\") " pod="calico-apiserver/calico-apiserver-86c67ddb4-6s9k8"
May 13 12:36:07.449997 kubelet[2650]: E0513 12:36:07.449957 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:36:07.451220 containerd[1521]: time="2025-05-13T12:36:07.451092997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9526,Uid:ca904cd7-1e92-491d-aeb7-3f7c4a33b320,Namespace:kube-system,Attempt:0,}"
May 13 12:36:07.456658 containerd[1521]: time="2025-05-13T12:36:07.456617838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87fb474fb-wjmnx,Uid:69116814-2f5c-4ee2-a520-4ee35d7c81a9,Namespace:calico-system,Attempt:0,}"
May 13 12:36:07.461691 containerd[1521]: time="2025-05-13T12:36:07.461661988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c67ddb4-6s9k8,Uid:b0c50f51-4a5b-495e-9b69-7356538618c6,Namespace:calico-apiserver,Attempt:0,}"
May 13 12:36:07.465412 containerd[1521]: time="2025-05-13T12:36:07.465380070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c67ddb4-djgp4,Uid:cf6950d9-0ba5-43dc-9de9-e60cd82a7be3,Namespace:calico-apiserver,Attempt:0,}"
May 13 12:36:07.469322 kubelet[2650]: E0513 12:36:07.469194 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:36:07.470304 containerd[1521]: time="2025-05-13T12:36:07.470048612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5r8q,Uid:22b2f084-94f7-495c-ac36-2ef6e79aa0ee,Namespace:kube-system,Attempt:0,}"
May 13 12:36:07.750423 kubelet[2650]: E0513 12:36:07.747236 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:36:07.751775 containerd[1521]: time="2025-05-13T12:36:07.750893128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 13 12:36:07.831750 containerd[1521]: time="2025-05-13T12:36:07.831706700Z" level=error msg="Failed to destroy network for sandbox \"1ea9f25ddbdbdec76ff293f019507dccd4e1b00b81f85542d3648bb54eb0ccb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.836942 containerd[1521]: time="2025-05-13T12:36:07.836146077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87fb474fb-wjmnx,Uid:69116814-2f5c-4ee2-a520-4ee35d7c81a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea9f25ddbdbdec76ff293f019507dccd4e1b00b81f85542d3648bb54eb0ccb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.837174 containerd[1521]: time="2025-05-13T12:36:07.836376682Z" level=error msg="Failed to destroy network for sandbox \"7143c91be4e8643e8d46a983b8e294001d3fbe50187e6cd4b9d2c0f6b7502ca0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.839512 containerd[1521]: time="2025-05-13T12:36:07.839470350Z" level=error msg="Failed to destroy network for sandbox \"a6ee57c7b5420c028132bb9dbf937f6c46055a53b4ac068eb3c293d7c4d3092b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.840947 containerd[1521]: time="2025-05-13T12:36:07.840525853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5r8q,Uid:22b2f084-94f7-495c-ac36-2ef6e79aa0ee,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7143c91be4e8643e8d46a983b8e294001d3fbe50187e6cd4b9d2c0f6b7502ca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.841540 kubelet[2650]: E0513 12:36:07.841144 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7143c91be4e8643e8d46a983b8e294001d3fbe50187e6cd4b9d2c0f6b7502ca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.841540 kubelet[2650]: E0513 12:36:07.841224 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7143c91be4e8643e8d46a983b8e294001d3fbe50187e6cd4b9d2c0f6b7502ca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5r8q"
May 13 12:36:07.841540 kubelet[2650]: E0513 12:36:07.841243 2650 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7143c91be4e8643e8d46a983b8e294001d3fbe50187e6cd4b9d2c0f6b7502ca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5r8q"
May 13 12:36:07.841690 kubelet[2650]: E0513 12:36:07.841297 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x5r8q_kube-system(22b2f084-94f7-495c-ac36-2ef6e79aa0ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x5r8q_kube-system(22b2f084-94f7-495c-ac36-2ef6e79aa0ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7143c91be4e8643e8d46a983b8e294001d3fbe50187e6cd4b9d2c0f6b7502ca0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x5r8q" podUID="22b2f084-94f7-495c-ac36-2ef6e79aa0ee"
May 13 12:36:07.841690 kubelet[2650]: E0513 12:36:07.841383 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea9f25ddbdbdec76ff293f019507dccd4e1b00b81f85542d3648bb54eb0ccb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.841690 kubelet[2650]: E0513 12:36:07.841440 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea9f25ddbdbdec76ff293f019507dccd4e1b00b81f85542d3648bb54eb0ccb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-87fb474fb-wjmnx"
May 13 12:36:07.841780 kubelet[2650]: E0513 12:36:07.841460 2650 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea9f25ddbdbdec76ff293f019507dccd4e1b00b81f85542d3648bb54eb0ccb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-87fb474fb-wjmnx"
May 13 12:36:07.841780 kubelet[2650]: E0513 12:36:07.841499 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-87fb474fb-wjmnx_calico-system(69116814-2f5c-4ee2-a520-4ee35d7c81a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-87fb474fb-wjmnx_calico-system(69116814-2f5c-4ee2-a520-4ee35d7c81a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ea9f25ddbdbdec76ff293f019507dccd4e1b00b81f85542d3648bb54eb0ccb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-87fb474fb-wjmnx" podUID="69116814-2f5c-4ee2-a520-4ee35d7c81a9"
May 13 12:36:07.842809 containerd[1521]: time="2025-05-13T12:36:07.842777182Z" level=error msg="Failed to destroy network for sandbox \"b945faf4d2a67e66838bea5f6fbecac515a6ff0124acbf749b1f1e21f1d3132b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.843096 containerd[1521]: time="2025-05-13T12:36:07.843053188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c67ddb4-6s9k8,Uid:b0c50f51-4a5b-495e-9b69-7356538618c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ee57c7b5420c028132bb9dbf937f6c46055a53b4ac068eb3c293d7c4d3092b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.843758 kubelet[2650]: E0513 12:36:07.843728 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ee57c7b5420c028132bb9dbf937f6c46055a53b4ac068eb3c293d7c4d3092b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.843827 kubelet[2650]: E0513 12:36:07.843806 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ee57c7b5420c028132bb9dbf937f6c46055a53b4ac068eb3c293d7c4d3092b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c67ddb4-6s9k8"
May 13 12:36:07.843865 kubelet[2650]: E0513 12:36:07.843826 2650 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ee57c7b5420c028132bb9dbf937f6c46055a53b4ac068eb3c293d7c4d3092b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c67ddb4-6s9k8"
May 13 12:36:07.844032 kubelet[2650]: E0513 12:36:07.843868 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86c67ddb4-6s9k8_calico-apiserver(b0c50f51-4a5b-495e-9b69-7356538618c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86c67ddb4-6s9k8_calico-apiserver(b0c50f51-4a5b-495e-9b69-7356538618c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6ee57c7b5420c028132bb9dbf937f6c46055a53b4ac068eb3c293d7c4d3092b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c67ddb4-6s9k8" podUID="b0c50f51-4a5b-495e-9b69-7356538618c6"
May 13 12:36:07.844121 containerd[1521]: time="2025-05-13T12:36:07.843956008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9526,Uid:ca904cd7-1e92-491d-aeb7-3f7c4a33b320,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b945faf4d2a67e66838bea5f6fbecac515a6ff0124acbf749b1f1e21f1d3132b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.844386 kubelet[2650]: E0513 12:36:07.844100 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b945faf4d2a67e66838bea5f6fbecac515a6ff0124acbf749b1f1e21f1d3132b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.844432 kubelet[2650]: E0513 12:36:07.844401 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b945faf4d2a67e66838bea5f6fbecac515a6ff0124acbf749b1f1e21f1d3132b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z9526"
May 13 12:36:07.844458 kubelet[2650]: E0513 12:36:07.844416 2650 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b945faf4d2a67e66838bea5f6fbecac515a6ff0124acbf749b1f1e21f1d3132b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z9526"
May 13 12:36:07.844480 kubelet[2650]: E0513 12:36:07.844463 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z9526_kube-system(ca904cd7-1e92-491d-aeb7-3f7c4a33b320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z9526_kube-system(ca904cd7-1e92-491d-aeb7-3f7c4a33b320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b945faf4d2a67e66838bea5f6fbecac515a6ff0124acbf749b1f1e21f1d3132b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z9526" podUID="ca904cd7-1e92-491d-aeb7-3f7c4a33b320"
May 13 12:36:07.847696 containerd[1521]: time="2025-05-13T12:36:07.847606848Z" level=error msg="Failed to destroy network for sandbox \"9a999fa14f7280cd527f13415a0b217db0391ab7202cbc0dbd336ef99da3a9d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.848573 containerd[1521]: time="2025-05-13T12:36:07.848536509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c67ddb4-djgp4,Uid:cf6950d9-0ba5-43dc-9de9-e60cd82a7be3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a999fa14f7280cd527f13415a0b217db0391ab7202cbc0dbd336ef99da3a9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.849379 kubelet[2650]: E0513 12:36:07.848972 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a999fa14f7280cd527f13415a0b217db0391ab7202cbc0dbd336ef99da3a9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:07.849778 kubelet[2650]: E0513 12:36:07.849746 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a999fa14f7280cd527f13415a0b217db0391ab7202cbc0dbd336ef99da3a9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c67ddb4-djgp4"
May 13 12:36:07.849812 kubelet[2650]: E0513 12:36:07.849787 2650 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a999fa14f7280cd527f13415a0b217db0391ab7202cbc0dbd336ef99da3a9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86c67ddb4-djgp4"
May 13 12:36:07.849865 kubelet[2650]: E0513 12:36:07.849839 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86c67ddb4-djgp4_calico-apiserver(cf6950d9-0ba5-43dc-9de9-e60cd82a7be3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86c67ddb4-djgp4_calico-apiserver(cf6950d9-0ba5-43dc-9de9-e60cd82a7be3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a999fa14f7280cd527f13415a0b217db0391ab7202cbc0dbd336ef99da3a9d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86c67ddb4-djgp4" podUID="cf6950d9-0ba5-43dc-9de9-e60cd82a7be3"
May 13 12:36:08.423269 systemd[1]: run-netns-cni\x2dfed1d707\x2d55a1\x2d970f\x2dd996\x2de9026460a621.mount: Deactivated successfully.
May 13 12:36:08.423367 systemd[1]: run-netns-cni\x2daa62c5e0\x2da1af\x2d1c43\x2d2450\x2dbe55a382f7af.mount: Deactivated successfully.
May 13 12:36:08.423423 systemd[1]: run-netns-cni\x2de343a8aa\x2d13e4\x2d9a03\x2df58a\x2d3980f999a3e6.mount: Deactivated successfully.
May 13 12:36:08.423467 systemd[1]: run-netns-cni\x2d9f1602c5\x2d8b73\x2d30b8\x2defef\x2dd7ec97b15f28.mount: Deactivated successfully.
May 13 12:36:08.423508 systemd[1]: run-netns-cni\x2d920de1ec\x2d9589\x2d848f\x2d42f2\x2df190b299ebfd.mount: Deactivated successfully.
May 13 12:36:08.665219 systemd[1]: Created slice kubepods-besteffort-poda13363c5_db73_4c15_bc44_8be9849ef5ce.slice - libcontainer container kubepods-besteffort-poda13363c5_db73_4c15_bc44_8be9849ef5ce.slice.
May 13 12:36:08.667169 containerd[1521]: time="2025-05-13T12:36:08.667132318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kdvf,Uid:a13363c5-db73-4c15-bc44-8be9849ef5ce,Namespace:calico-system,Attempt:0,}"
May 13 12:36:08.715409 containerd[1521]: time="2025-05-13T12:36:08.715082608Z" level=error msg="Failed to destroy network for sandbox \"f1312a6416e698f7049c217d7fb1cc3af152001ae30b067fc5465277b01ae9c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:08.716265 containerd[1521]: time="2025-05-13T12:36:08.716227952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kdvf,Uid:a13363c5-db73-4c15-bc44-8be9849ef5ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1312a6416e698f7049c217d7fb1cc3af152001ae30b067fc5465277b01ae9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:08.716619 kubelet[2650]: E0513 12:36:08.716517 2650 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1312a6416e698f7049c217d7fb1cc3af152001ae30b067fc5465277b01ae9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:36:08.716760 kubelet[2650]: E0513 12:36:08.716598 2650 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1312a6416e698f7049c217d7fb1cc3af152001ae30b067fc5465277b01ae9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kdvf"
May 13 12:36:08.716760 kubelet[2650]: E0513 12:36:08.716699 2650 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1312a6416e698f7049c217d7fb1cc3af152001ae30b067fc5465277b01ae9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kdvf"
May 13 12:36:08.716858 kubelet[2650]: E0513 12:36:08.716832 2650 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7kdvf_calico-system(a13363c5-db73-4c15-bc44-8be9849ef5ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7kdvf_calico-system(a13363c5-db73-4c15-bc44-8be9849ef5ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1312a6416e698f7049c217d7fb1cc3af152001ae30b067fc5465277b01ae9c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kdvf" podUID="a13363c5-db73-4c15-bc44-8be9849ef5ce"
May 13 12:36:08.717033 systemd[1]: run-netns-cni\x2d074dc54b\x2dc76d\x2d6377\x2d75fc\x2d4607bd72246f.mount: Deactivated successfully.
May 13 12:36:09.300236 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:33790.service - OpenSSH per-connection server daemon (10.0.0.1:33790).
May 13 12:36:09.352722 sshd[3648]: Accepted publickey for core from 10.0.0.1 port 33790 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:36:09.354000 sshd-session[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:36:09.357834 systemd-logind[1502]: New session 8 of user core.
May 13 12:36:09.369037 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 12:36:09.485987 sshd[3650]: Connection closed by 10.0.0.1 port 33790
May 13 12:36:09.486328 sshd-session[3648]: pam_unix(sshd:session): session closed for user core
May 13 12:36:09.489938 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:33790.service: Deactivated successfully.
May 13 12:36:09.491800 systemd[1]: session-8.scope: Deactivated successfully.
May 13 12:36:09.493536 systemd-logind[1502]: Session 8 logged out. Waiting for processes to exit.
May 13 12:36:09.494759 systemd-logind[1502]: Removed session 8.
May 13 12:36:12.861003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904825300.mount: Deactivated successfully.
May 13 12:36:13.123239 containerd[1521]: time="2025-05-13T12:36:13.123194439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:36:13.124111 containerd[1521]: time="2025-05-13T12:36:13.123971653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893"
May 13 12:36:13.124806 containerd[1521]: time="2025-05-13T12:36:13.124769347Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:36:13.126681 containerd[1521]: time="2025-05-13T12:36:13.126535177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:36:13.127049 containerd[1521]: time="2025-05-13T12:36:13.127018946Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 5.375435403s"
May 13 12:36:13.127100 containerd[1521]: time="2025-05-13T12:36:13.127048066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\""
May 13 12:36:13.145622 containerd[1521]: time="2025-05-13T12:36:13.145566590Z" level=info msg="CreateContainer within sandbox \"7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 13 12:36:13.152916 containerd[1521]: time="2025-05-13T12:36:13.152774156Z" level=info msg="Container 85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d: CDI devices from CRI Config.CDIDevices: []"
May 13 12:36:13.162061 containerd[1521]: time="2025-05-13T12:36:13.162020998Z" level=info msg="CreateContainer within sandbox \"7ae2aa0b138978c98df44c05adaffb7f97130d79aac8a0696da4c4a751e31cbb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d\""
May 13 12:36:13.163791 containerd[1521]: time="2025-05-13T12:36:13.163763348Z" level=info msg="StartContainer for \"85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d\""
May 13 12:36:13.165418 containerd[1521]: time="2025-05-13T12:36:13.165333376Z" level=info msg="connecting to shim 85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d" address="unix:///run/containerd/s/edf2aff33a041cc4885815bd91ed4e46ecfcba70a1d2f342e0e41dd610b5576a" protocol=ttrpc version=3
May 13 12:36:13.202078 systemd[1]: Started cri-containerd-85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d.scope - libcontainer container 85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d.
May 13 12:36:13.239579 containerd[1521]: time="2025-05-13T12:36:13.239527914Z" level=info msg="StartContainer for \"85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d\" returns successfully"
May 13 12:36:13.419214 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 13 12:36:13.419372 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 13 12:36:13.761313 kubelet[2650]: E0513 12:36:13.761244 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:36:14.502567 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:33396.service - OpenSSH per-connection server daemon (10.0.0.1:33396).
May 13 12:36:14.564588 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 33396 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:14.566067 sshd-session[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:14.570627 systemd-logind[1502]: New session 9 of user core. May 13 12:36:14.585169 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 12:36:14.723842 sshd[3734]: Connection closed by 10.0.0.1 port 33396 May 13 12:36:14.724270 sshd-session[3732]: pam_unix(sshd:session): session closed for user core May 13 12:36:14.730237 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:33396.service: Deactivated successfully. May 13 12:36:14.733337 systemd[1]: session-9.scope: Deactivated successfully. May 13 12:36:14.737014 systemd-logind[1502]: Session 9 logged out. Waiting for processes to exit. May 13 12:36:14.740271 systemd-logind[1502]: Removed session 9. May 13 12:36:14.763039 kubelet[2650]: I0513 12:36:14.762947 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:36:14.766012 kubelet[2650]: E0513 12:36:14.763314 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:15.087470 systemd-networkd[1421]: vxlan.calico: Link UP May 13 12:36:15.087475 systemd-networkd[1421]: vxlan.calico: Gained carrier May 13 12:36:15.975713 kubelet[2650]: I0513 12:36:15.975468 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:36:15.976171 kubelet[2650]: E0513 12:36:15.975891 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:16.075483 containerd[1521]: time="2025-05-13T12:36:16.075429948Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d\" id:\"d0350794fa605bab0a053a6c850d796a77b6512fad2a8fed25f74322dfce96b8\" pid:3967 exit_status:1 exited_at:{seconds:1747139776 nanos:74763377}" May 13 12:36:16.148852 containerd[1521]: time="2025-05-13T12:36:16.148691108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d\" id:\"6102347e08722b4112f4757bd2a501550eafa69a78e52f8434472d4700410f12\" pid:3991 exit_status:1 exited_at:{seconds:1747139776 nanos:147942536}" May 13 12:36:16.301210 systemd-networkd[1421]: vxlan.calico: Gained IPv6LL May 13 12:36:19.664766 containerd[1521]: time="2025-05-13T12:36:19.664602281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c67ddb4-6s9k8,Uid:b0c50f51-4a5b-495e-9b69-7356538618c6,Namespace:calico-apiserver,Attempt:0,}" May 13 12:36:19.665465 containerd[1521]: time="2025-05-13T12:36:19.665251370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87fb474fb-wjmnx,Uid:69116814-2f5c-4ee2-a520-4ee35d7c81a9,Namespace:calico-system,Attempt:0,}" May 13 12:36:19.745140 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:33400.service - OpenSSH per-connection server daemon (10.0.0.1:33400). May 13 12:36:19.824471 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 33400 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:19.834658 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:19.842609 systemd-logind[1502]: New session 10 of user core. May 13 12:36:19.851057 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 13 12:36:19.985430 sshd[4049]: Connection closed by 10.0.0.1 port 33400 May 13 12:36:19.985965 sshd-session[4031]: pam_unix(sshd:session): session closed for user core May 13 12:36:19.993594 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:33400.service: Deactivated successfully. May 13 12:36:19.995875 systemd-networkd[1421]: calib4704d907ad: Link UP May 13 12:36:19.996181 systemd-networkd[1421]: calib4704d907ad: Gained carrier May 13 12:36:19.996345 systemd[1]: session-10.scope: Deactivated successfully. May 13 12:36:19.998185 systemd-logind[1502]: Session 10 logged out. Waiting for processes to exit. May 13 12:36:20.002547 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:33416.service - OpenSSH per-connection server daemon (10.0.0.1:33416). May 13 12:36:20.007583 systemd-logind[1502]: Removed session 10. May 13 12:36:20.013144 containerd[1521]: 2025-05-13 12:36:19.778 [INFO][4005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0 calico-apiserver-86c67ddb4- calico-apiserver b0c50f51-4a5b-495e-9b69-7356538618c6 758 0 2025-05-13 12:35:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86c67ddb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86c67ddb4-6s9k8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4704d907ad [] []}} ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-6s9k8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-" May 13 12:36:20.013144 containerd[1521]: 2025-05-13 12:36:19.778 [INFO][4005] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-6s9k8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" May 13 12:36:20.013144 containerd[1521]: 2025-05-13 12:36:19.933 [INFO][4038] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" HandleID="k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Workload="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.954 [INFO][4038] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" HandleID="k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Workload="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000353e60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86c67ddb4-6s9k8", "timestamp":"2025-05-13 12:36:19.933241171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.954 [INFO][4038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.955 [INFO][4038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.955 [INFO][4038] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.957 [INFO][4038] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" host="localhost" May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.965 [INFO][4038] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.974 [INFO][4038] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.975 [INFO][4038] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.977 [INFO][4038] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:36:20.013305 containerd[1521]: 2025-05-13 12:36:19.977 [INFO][4038] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" host="localhost" May 13 12:36:20.013509 containerd[1521]: 2025-05-13 12:36:19.979 [INFO][4038] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc May 13 12:36:20.013509 containerd[1521]: 2025-05-13 12:36:19.984 [INFO][4038] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" host="localhost" May 13 12:36:20.013509 containerd[1521]: 2025-05-13 12:36:19.989 [INFO][4038] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" host="localhost" May 13 12:36:20.013509 containerd[1521]: 2025-05-13 12:36:19.989 [INFO][4038] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" host="localhost" May 13 12:36:20.013509 containerd[1521]: 2025-05-13 12:36:19.989 [INFO][4038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:36:20.013509 containerd[1521]: 2025-05-13 12:36:19.989 [INFO][4038] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" HandleID="k8s-pod-network.80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Workload="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" May 13 12:36:20.013617 containerd[1521]: 2025-05-13 12:36:19.992 [INFO][4005] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-6s9k8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0", GenerateName:"calico-apiserver-86c67ddb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"b0c50f51-4a5b-495e-9b69-7356538618c6", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c67ddb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86c67ddb4-6s9k8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4704d907ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:20.013665 containerd[1521]: 2025-05-13 12:36:19.992 [INFO][4005] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-6s9k8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" May 13 12:36:20.013665 containerd[1521]: 2025-05-13 12:36:19.992 [INFO][4005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4704d907ad ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-6s9k8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" May 13 12:36:20.013665 containerd[1521]: 2025-05-13 12:36:19.995 [INFO][4005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-6s9k8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" May 13 12:36:20.013721 containerd[1521]: 2025-05-13 12:36:19.996 [INFO][4005] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-6s9k8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0", GenerateName:"calico-apiserver-86c67ddb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"b0c50f51-4a5b-495e-9b69-7356538618c6", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c67ddb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc", Pod:"calico-apiserver-86c67ddb4-6s9k8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4704d907ad", MAC:"ce:50:30:ab:01:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:20.013766 containerd[1521]: 2025-05-13 12:36:20.004 [INFO][4005] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" Namespace="calico-apiserver" 
Pod="calico-apiserver-86c67ddb4-6s9k8" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--6s9k8-eth0" May 13 12:36:20.023826 kubelet[2650]: I0513 12:36:20.023748 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7db44" podStartSLOduration=8.324805241 podStartE2EDuration="28.023731672s" podCreationTimestamp="2025-05-13 12:35:52 +0000 UTC" firstStartedPulling="2025-05-13 12:35:53.428676245 +0000 UTC m=+13.863254217" lastFinishedPulling="2025-05-13 12:36:13.127602636 +0000 UTC m=+33.562180648" observedRunningTime="2025-05-13 12:36:13.775815893 +0000 UTC m=+34.210393945" watchObservedRunningTime="2025-05-13 12:36:20.023731672 +0000 UTC m=+40.458309684" May 13 12:36:20.055505 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 33416 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:20.056588 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:20.062483 systemd-logind[1502]: New session 11 of user core. May 13 12:36:20.068053 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 13 12:36:20.095926 containerd[1521]: time="2025-05-13T12:36:20.095063756Z" level=info msg="connecting to shim 80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc" address="unix:///run/containerd/s/55a33a33325e403db6a883dea562032cd91b52137efdfcd7beb2c0de0f2e67e6" namespace=k8s.io protocol=ttrpc version=3 May 13 12:36:20.113597 systemd-networkd[1421]: cali5d98ed21614: Link UP May 13 12:36:20.117767 systemd-networkd[1421]: cali5d98ed21614: Gained carrier May 13 12:36:20.145817 containerd[1521]: 2025-05-13 12:36:19.779 [INFO][4017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0 calico-kube-controllers-87fb474fb- calico-system 69116814-2f5c-4ee2-a520-4ee35d7c81a9 755 0 2025-05-13 12:35:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:87fb474fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-87fb474fb-wjmnx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5d98ed21614 [] []}} ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Namespace="calico-system" Pod="calico-kube-controllers-87fb474fb-wjmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-" May 13 12:36:20.145817 containerd[1521]: 2025-05-13 12:36:19.779 [INFO][4017] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Namespace="calico-system" Pod="calico-kube-controllers-87fb474fb-wjmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" May 13 12:36:20.145817 containerd[1521]: 2025-05-13 12:36:19.933 [INFO][4036] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" HandleID="k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Workload="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:19.954 [INFO][4036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" HandleID="k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Workload="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001abea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-87fb474fb-wjmnx", "timestamp":"2025-05-13 12:36:19.933239811 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:19.954 [INFO][4036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:19.989 [INFO][4036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:19.990 [INFO][4036] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:20.058 [INFO][4036] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" host="localhost" May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:20.063 [INFO][4036] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:20.071 [INFO][4036] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:20.075 [INFO][4036] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:20.077 [INFO][4036] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:36:20.146025 containerd[1521]: 2025-05-13 12:36:20.077 [INFO][4036] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" host="localhost" May 13 12:36:20.146216 containerd[1521]: 2025-05-13 12:36:20.081 [INFO][4036] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3 May 13 12:36:20.146216 containerd[1521]: 2025-05-13 12:36:20.084 [INFO][4036] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" host="localhost" May 13 12:36:20.146216 containerd[1521]: 2025-05-13 12:36:20.094 [INFO][4036] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" host="localhost" May 13 12:36:20.146216 containerd[1521]: 2025-05-13 12:36:20.094 [INFO][4036] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" host="localhost" May 13 12:36:20.146216 containerd[1521]: 2025-05-13 12:36:20.094 [INFO][4036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:36:20.146216 containerd[1521]: 2025-05-13 12:36:20.094 [INFO][4036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" HandleID="k8s-pod-network.d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Workload="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" May 13 12:36:20.146364 containerd[1521]: 2025-05-13 12:36:20.107 [INFO][4017] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Namespace="calico-system" Pod="calico-kube-controllers-87fb474fb-wjmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0", GenerateName:"calico-kube-controllers-87fb474fb-", Namespace:"calico-system", SelfLink:"", UID:"69116814-2f5c-4ee2-a520-4ee35d7c81a9", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"87fb474fb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-87fb474fb-wjmnx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d98ed21614", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:20.146564 containerd[1521]: 2025-05-13 12:36:20.107 [INFO][4017] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Namespace="calico-system" Pod="calico-kube-controllers-87fb474fb-wjmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" May 13 12:36:20.146564 containerd[1521]: 2025-05-13 12:36:20.107 [INFO][4017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d98ed21614 ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Namespace="calico-system" Pod="calico-kube-controllers-87fb474fb-wjmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" May 13 12:36:20.146564 containerd[1521]: 2025-05-13 12:36:20.115 [INFO][4017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Namespace="calico-system" Pod="calico-kube-controllers-87fb474fb-wjmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" May 13 12:36:20.146643 containerd[1521]: 2025-05-13 12:36:20.115 [INFO][4017] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Namespace="calico-system" Pod="calico-kube-controllers-87fb474fb-wjmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0", GenerateName:"calico-kube-controllers-87fb474fb-", Namespace:"calico-system", SelfLink:"", UID:"69116814-2f5c-4ee2-a520-4ee35d7c81a9", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"87fb474fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3", Pod:"calico-kube-controllers-87fb474fb-wjmnx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5d98ed21614", MAC:"ba:a3:09:52:3d:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:20.146699 containerd[1521]: 2025-05-13 12:36:20.129 [INFO][4017] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" Namespace="calico-system" Pod="calico-kube-controllers-87fb474fb-wjmnx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--87fb474fb--wjmnx-eth0" May 13 12:36:20.177682 containerd[1521]: time="2025-05-13T12:36:20.177635279Z" level=info msg="connecting to shim d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3" address="unix:///run/containerd/s/09eb07d0103dc1d8dae5e8f38449bca137f1929cc1684552ad91d890f459b152" namespace=k8s.io protocol=ttrpc version=3 May 13 12:36:20.182376 systemd[1]: Started cri-containerd-80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc.scope - libcontainer container 80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc. May 13 12:36:20.198872 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:36:20.204119 systemd[1]: Started cri-containerd-d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3.scope - libcontainer container d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3. 
May 13 12:36:20.224551 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:36:20.233951 containerd[1521]: time="2025-05-13T12:36:20.233883672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c67ddb4-6s9k8,Uid:b0c50f51-4a5b-495e-9b69-7356538618c6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc\"" May 13 12:36:20.241002 containerd[1521]: time="2025-05-13T12:36:20.240733048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 12:36:20.260978 containerd[1521]: time="2025-05-13T12:36:20.260939173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-87fb474fb-wjmnx,Uid:69116814-2f5c-4ee2-a520-4ee35d7c81a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3\"" May 13 12:36:20.274932 sshd[4086]: Connection closed by 10.0.0.1 port 33416 May 13 12:36:20.274541 sshd-session[4068]: pam_unix(sshd:session): session closed for user core May 13 12:36:20.285705 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:33416.service: Deactivated successfully. May 13 12:36:20.287578 systemd[1]: session-11.scope: Deactivated successfully. May 13 12:36:20.290449 systemd-logind[1502]: Session 11 logged out. Waiting for processes to exit. May 13 12:36:20.294766 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:33428.service - OpenSSH per-connection server daemon (10.0.0.1:33428). May 13 12:36:20.295788 systemd-logind[1502]: Removed session 11. May 13 12:36:20.347751 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 33428 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:20.348974 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:20.353258 systemd-logind[1502]: New session 12 of user core. 
May 13 12:36:20.364049 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 12:36:20.480259 sshd[4207]: Connection closed by 10.0.0.1 port 33428 May 13 12:36:20.480609 sshd-session[4203]: pam_unix(sshd:session): session closed for user core May 13 12:36:20.484299 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:33428.service: Deactivated successfully. May 13 12:36:20.487550 systemd[1]: session-12.scope: Deactivated successfully. May 13 12:36:20.488352 systemd-logind[1502]: Session 12 logged out. Waiting for processes to exit. May 13 12:36:20.489445 systemd-logind[1502]: Removed session 12. May 13 12:36:20.661273 kubelet[2650]: E0513 12:36:20.661114 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:20.661814 containerd[1521]: time="2025-05-13T12:36:20.661767817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5r8q,Uid:22b2f084-94f7-495c-ac36-2ef6e79aa0ee,Namespace:kube-system,Attempt:0,}" May 13 12:36:20.661945 containerd[1521]: time="2025-05-13T12:36:20.661767977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kdvf,Uid:a13363c5-db73-4c15-bc44-8be9849ef5ce,Namespace:calico-system,Attempt:0,}" May 13 12:36:20.774926 systemd-networkd[1421]: cali36f8f866d82: Link UP May 13 12:36:20.775659 systemd-networkd[1421]: cali36f8f866d82: Gained carrier May 13 12:36:20.793740 containerd[1521]: 2025-05-13 12:36:20.701 [INFO][4219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0 coredns-668d6bf9bc- kube-system 22b2f084-94f7-495c-ac36-2ef6e79aa0ee 757 0 2025-05-13 12:35:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
localhost coredns-668d6bf9bc-x5r8q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali36f8f866d82 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5r8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5r8q-" May 13 12:36:20.793740 containerd[1521]: 2025-05-13 12:36:20.701 [INFO][4219] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5r8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" May 13 12:36:20.793740 containerd[1521]: 2025-05-13 12:36:20.733 [INFO][4247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" HandleID="k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Workload="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.745 [INFO][4247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" HandleID="k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Workload="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c40c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-x5r8q", "timestamp":"2025-05-13 12:36:20.733828672 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.745 [INFO][4247] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.745 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.745 [INFO][4247] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.748 [INFO][4247] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" host="localhost" May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.751 [INFO][4247] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.755 [INFO][4247] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.757 [INFO][4247] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.758 [INFO][4247] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:36:20.794390 containerd[1521]: 2025-05-13 12:36:20.759 [INFO][4247] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" host="localhost" May 13 12:36:20.794605 containerd[1521]: 2025-05-13 12:36:20.760 [INFO][4247] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6 May 13 12:36:20.794605 containerd[1521]: 2025-05-13 12:36:20.763 [INFO][4247] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" host="localhost" May 13 12:36:20.794605 containerd[1521]: 2025-05-13 
12:36:20.768 [INFO][4247] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" host="localhost" May 13 12:36:20.794605 containerd[1521]: 2025-05-13 12:36:20.768 [INFO][4247] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" host="localhost" May 13 12:36:20.794605 containerd[1521]: 2025-05-13 12:36:20.768 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:36:20.794605 containerd[1521]: 2025-05-13 12:36:20.768 [INFO][4247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" HandleID="k8s-pod-network.78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Workload="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" May 13 12:36:20.794726 containerd[1521]: 2025-05-13 12:36:20.771 [INFO][4219] cni-plugin/k8s.go 386: Populated endpoint ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5r8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22b2f084-94f7-495c-ac36-2ef6e79aa0ee", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-x5r8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36f8f866d82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:20.794778 containerd[1521]: 2025-05-13 12:36:20.771 [INFO][4219] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5r8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" May 13 12:36:20.794778 containerd[1521]: 2025-05-13 12:36:20.771 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36f8f866d82 ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5r8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" May 13 12:36:20.794778 containerd[1521]: 2025-05-13 12:36:20.776 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-x5r8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" May 13 12:36:20.794843 containerd[1521]: 2025-05-13 12:36:20.776 [INFO][4219] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5r8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"22b2f084-94f7-495c-ac36-2ef6e79aa0ee", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6", Pod:"coredns-668d6bf9bc-x5r8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36f8f866d82", MAC:"b6:37:04:d5:11:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:20.794843 containerd[1521]: 2025-05-13 12:36:20.786 [INFO][4219] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5r8q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5r8q-eth0" May 13 12:36:20.825783 containerd[1521]: time="2025-05-13T12:36:20.825747527Z" level=info msg="connecting to shim 78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6" address="unix:///run/containerd/s/814ecb19fdf74aeb798eabfa7709322e65e2bd84e63034c7848f04effc0b4778" namespace=k8s.io protocol=ttrpc version=3 May 13 12:36:20.855062 systemd[1]: Started cri-containerd-78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6.scope - libcontainer container 78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6. 
May 13 12:36:20.868056 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:36:20.878525 systemd-networkd[1421]: cali6290af90427: Link UP May 13 12:36:20.878824 systemd-networkd[1421]: cali6290af90427: Gained carrier May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.707 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7kdvf-eth0 csi-node-driver- calico-system a13363c5-db73-4c15-bc44-8be9849ef5ce 650 0 2025-05-13 12:35:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7kdvf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6290af90427 [] []}} ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Namespace="calico-system" Pod="csi-node-driver-7kdvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kdvf-" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.707 [INFO][4228] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Namespace="calico-system" Pod="csi-node-driver-7kdvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kdvf-eth0" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.737 [INFO][4254] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" HandleID="k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Workload="localhost-k8s-csi--node--driver--7kdvf-eth0" May 13 12:36:20.893524 containerd[1521]: 
2025-05-13 12:36:20.751 [INFO][4254] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" HandleID="k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Workload="localhost-k8s-csi--node--driver--7kdvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f4e00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7kdvf", "timestamp":"2025-05-13 12:36:20.737334442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.751 [INFO][4254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.769 [INFO][4254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.769 [INFO][4254] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.848 [INFO][4254] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.852 [INFO][4254] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.857 [INFO][4254] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.859 [INFO][4254] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.863 [INFO][4254] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.863 [INFO][4254] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.864 [INFO][4254] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.868 [INFO][4254] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.873 [INFO][4254] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.874 [INFO][4254] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" host="localhost" May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.874 [INFO][4254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:36:20.893524 containerd[1521]: 2025-05-13 12:36:20.874 [INFO][4254] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" HandleID="k8s-pod-network.f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Workload="localhost-k8s-csi--node--driver--7kdvf-eth0" May 13 12:36:20.894130 containerd[1521]: 2025-05-13 12:36:20.876 [INFO][4228] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Namespace="calico-system" Pod="csi-node-driver-7kdvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kdvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kdvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a13363c5-db73-4c15-bc44-8be9849ef5ce", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7kdvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6290af90427", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:20.894130 containerd[1521]: 2025-05-13 12:36:20.876 [INFO][4228] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Namespace="calico-system" Pod="csi-node-driver-7kdvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kdvf-eth0" May 13 12:36:20.894130 containerd[1521]: 2025-05-13 12:36:20.876 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6290af90427 ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Namespace="calico-system" Pod="csi-node-driver-7kdvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kdvf-eth0" May 13 12:36:20.894130 containerd[1521]: 2025-05-13 12:36:20.877 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Namespace="calico-system" Pod="csi-node-driver-7kdvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kdvf-eth0" May 13 12:36:20.894130 containerd[1521]: 2025-05-13 12:36:20.878 [INFO][4228] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Namespace="calico-system" 
Pod="csi-node-driver-7kdvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kdvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kdvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a13363c5-db73-4c15-bc44-8be9849ef5ce", ResourceVersion:"650", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a", Pod:"csi-node-driver-7kdvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6290af90427", MAC:"a2:8b:c1:dd:b8:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:20.894130 containerd[1521]: 2025-05-13 12:36:20.889 [INFO][4228] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" Namespace="calico-system" Pod="csi-node-driver-7kdvf" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kdvf-eth0" May 13 12:36:20.907754 containerd[1521]: 
time="2025-05-13T12:36:20.907721161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5r8q,Uid:22b2f084-94f7-495c-ac36-2ef6e79aa0ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6\"" May 13 12:36:20.908791 kubelet[2650]: E0513 12:36:20.908763 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:20.921739 containerd[1521]: time="2025-05-13T12:36:20.921649637Z" level=info msg="CreateContainer within sandbox \"78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:36:20.926093 containerd[1521]: time="2025-05-13T12:36:20.926061100Z" level=info msg="connecting to shim f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a" address="unix:///run/containerd/s/6da9093c197202b473bc9a3fc33084d8ebf61f337bb1e3d68936fba924a5d51f" namespace=k8s.io protocol=ttrpc version=3 May 13 12:36:20.939440 containerd[1521]: time="2025-05-13T12:36:20.939408088Z" level=info msg="Container 8c8f3504068665e194b5f099283e467a16e11ab30c7f7417221d9edb3f036e7b: CDI devices from CRI Config.CDIDevices: []" May 13 12:36:20.944622 containerd[1521]: time="2025-05-13T12:36:20.944589120Z" level=info msg="CreateContainer within sandbox \"78c4406ca29951f0a436f15f4ff3c8b0e795dcd1c7759f93836e0ec9544d7ba6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c8f3504068665e194b5f099283e467a16e11ab30c7f7417221d9edb3f036e7b\"" May 13 12:36:20.954066 systemd[1]: Started cri-containerd-f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a.scope - libcontainer container f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a. 
May 13 12:36:20.955527 containerd[1521]: time="2025-05-13T12:36:20.955486434Z" level=info msg="StartContainer for \"8c8f3504068665e194b5f099283e467a16e11ab30c7f7417221d9edb3f036e7b\"" May 13 12:36:20.956336 containerd[1521]: time="2025-05-13T12:36:20.956288565Z" level=info msg="connecting to shim 8c8f3504068665e194b5f099283e467a16e11ab30c7f7417221d9edb3f036e7b" address="unix:///run/containerd/s/814ecb19fdf74aeb798eabfa7709322e65e2bd84e63034c7848f04effc0b4778" protocol=ttrpc version=3 May 13 12:36:20.965041 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:36:20.978136 systemd[1]: Started cri-containerd-8c8f3504068665e194b5f099283e467a16e11ab30c7f7417221d9edb3f036e7b.scope - libcontainer container 8c8f3504068665e194b5f099283e467a16e11ab30c7f7417221d9edb3f036e7b. May 13 12:36:20.982666 containerd[1521]: time="2025-05-13T12:36:20.982609656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kdvf,Uid:a13363c5-db73-4c15-bc44-8be9849ef5ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a\"" May 13 12:36:21.004291 containerd[1521]: time="2025-05-13T12:36:21.004260160Z" level=info msg="StartContainer for \"8c8f3504068665e194b5f099283e467a16e11ab30c7f7417221d9edb3f036e7b\" returns successfully" May 13 12:36:21.799764 kubelet[2650]: E0513 12:36:21.799737 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:21.806991 systemd-networkd[1421]: cali5d98ed21614: Gained IPv6LL May 13 12:36:21.810675 kubelet[2650]: I0513 12:36:21.810626 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x5r8q" podStartSLOduration=36.810610135 podStartE2EDuration="36.810610135s" podCreationTimestamp="2025-05-13 12:35:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:36:21.810552134 +0000 UTC m=+42.245130186" watchObservedRunningTime="2025-05-13 12:36:21.810610135 +0000 UTC m=+42.245188147" May 13 12:36:21.933239 systemd-networkd[1421]: calib4704d907ad: Gained IPv6LL May 13 12:36:22.445148 systemd-networkd[1421]: cali6290af90427: Gained IPv6LL May 13 12:36:22.509191 systemd-networkd[1421]: cali36f8f866d82: Gained IPv6LL May 13 12:36:22.661359 kubelet[2650]: E0513 12:36:22.661138 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:22.661459 containerd[1521]: time="2025-05-13T12:36:22.661411609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9526,Uid:ca904cd7-1e92-491d-aeb7-3f7c4a33b320,Namespace:kube-system,Attempt:0,}" May 13 12:36:22.661731 containerd[1521]: time="2025-05-13T12:36:22.661425729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c67ddb4-djgp4,Uid:cf6950d9-0ba5-43dc-9de9-e60cd82a7be3,Namespace:calico-apiserver,Attempt:0,}" May 13 12:36:22.780738 systemd-networkd[1421]: calib2c57b8a76d: Link UP May 13 12:36:22.780933 systemd-networkd[1421]: calib2c57b8a76d: Gained carrier May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.706 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--z9526-eth0 coredns-668d6bf9bc- kube-system ca904cd7-1e92-491d-aeb7-3f7c4a33b320 751 0 2025-05-13 12:35:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-z9526 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
calib2c57b8a76d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9526" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9526-" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.706 [INFO][4424] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9526" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.737 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" HandleID="k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Workload="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.753 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" HandleID="k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Workload="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b170), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-z9526", "timestamp":"2025-05-13 12:36:22.737675508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.753 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.754 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.754 [INFO][4454] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.755 [INFO][4454] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.758 [INFO][4454] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.762 [INFO][4454] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.764 [INFO][4454] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.766 [INFO][4454] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.766 [INFO][4454] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.767 [INFO][4454] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.770 [INFO][4454] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.776 [INFO][4454] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.776 [INFO][4454] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" host="localhost" May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.776 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:36:22.794452 containerd[1521]: 2025-05-13 12:36:22.776 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" HandleID="k8s-pod-network.b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Workload="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" May 13 12:36:22.795670 containerd[1521]: 2025-05-13 12:36:22.778 [INFO][4424] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9526" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z9526-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ca904cd7-1e92-491d-aeb7-3f7c4a33b320", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-z9526", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2c57b8a76d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:22.795670 containerd[1521]: 2025-05-13 12:36:22.778 [INFO][4424] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9526" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" May 13 12:36:22.795670 containerd[1521]: 2025-05-13 12:36:22.778 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2c57b8a76d ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9526" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" May 13 12:36:22.795670 containerd[1521]: 2025-05-13 12:36:22.780 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-z9526" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" May 13 12:36:22.795670 containerd[1521]: 2025-05-13 12:36:22.780 [INFO][4424] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9526" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z9526-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ca904cd7-1e92-491d-aeb7-3f7c4a33b320", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab", Pod:"coredns-668d6bf9bc-z9526", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2c57b8a76d", MAC:"12:4a:64:dd:ef:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:22.795670 containerd[1521]: 2025-05-13 12:36:22.791 [INFO][4424] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9526" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9526-eth0" May 13 12:36:22.821659 kubelet[2650]: E0513 12:36:22.821547 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:22.824032 containerd[1521]: time="2025-05-13T12:36:22.823998221Z" level=info msg="connecting to shim b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab" address="unix:///run/containerd/s/d6b65f2093bac74ef3e9b41171a70b048928df9025ecc147fb81e3be5e3803cb" namespace=k8s.io protocol=ttrpc version=3 May 13 12:36:22.850058 systemd[1]: Started cri-containerd-b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab.scope - libcontainer container b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab. 
May 13 12:36:22.862303 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:36:22.889873 containerd[1521]: time="2025-05-13T12:36:22.889839221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9526,Uid:ca904cd7-1e92-491d-aeb7-3f7c4a33b320,Namespace:kube-system,Attempt:0,} returns sandbox id \"b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab\"" May 13 12:36:22.891909 kubelet[2650]: E0513 12:36:22.891718 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:22.892134 systemd-networkd[1421]: cali9168f36018b: Link UP May 13 12:36:22.893597 systemd-networkd[1421]: cali9168f36018b: Gained carrier May 13 12:36:22.897563 containerd[1521]: time="2025-05-13T12:36:22.897535604Z" level=info msg="CreateContainer within sandbox \"b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.708 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0 calico-apiserver-86c67ddb4- calico-apiserver cf6950d9-0ba5-43dc-9de9-e60cd82a7be3 759 0 2025-05-13 12:35:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86c67ddb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86c67ddb4-djgp4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9168f36018b [] []}} ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Namespace="calico-apiserver" 
Pod="calico-apiserver-86c67ddb4-djgp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.708 [INFO][4429] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-djgp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.739 [INFO][4456] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" HandleID="k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Workload="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.754 [INFO][4456] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" HandleID="k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Workload="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ad7f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86c67ddb4-djgp4", "timestamp":"2025-05-13 12:36:22.739443452 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.754 [INFO][4456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.776 [INFO][4456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.776 [INFO][4456] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.857 [INFO][4456] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.862 [INFO][4456] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.867 [INFO][4456] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.868 [INFO][4456] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.872 [INFO][4456] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.872 [INFO][4456] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.873 [INFO][4456] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03 May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.878 [INFO][4456] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.883 [INFO][4456] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.883 [INFO][4456] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" host="localhost" May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.883 [INFO][4456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:36:22.908740 containerd[1521]: 2025-05-13 12:36:22.883 [INFO][4456] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" HandleID="k8s-pod-network.524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Workload="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" May 13 12:36:22.909228 containerd[1521]: 2025-05-13 12:36:22.888 [INFO][4429] cni-plugin/k8s.go 386: Populated endpoint ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-djgp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0", GenerateName:"calico-apiserver-86c67ddb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf6950d9-0ba5-43dc-9de9-e60cd82a7be3", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c67ddb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86c67ddb4-djgp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9168f36018b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:22.909228 containerd[1521]: 2025-05-13 12:36:22.888 [INFO][4429] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-djgp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" May 13 12:36:22.909228 containerd[1521]: 2025-05-13 12:36:22.888 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9168f36018b ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-djgp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" May 13 12:36:22.909228 containerd[1521]: 2025-05-13 12:36:22.893 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-djgp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" May 13 12:36:22.909228 containerd[1521]: 2025-05-13 12:36:22.895 [INFO][4429] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Namespace="calico-apiserver" Pod="calico-apiserver-86c67ddb4-djgp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0", GenerateName:"calico-apiserver-86c67ddb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf6950d9-0ba5-43dc-9de9-e60cd82a7be3", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 35, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86c67ddb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03", Pod:"calico-apiserver-86c67ddb4-djgp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9168f36018b", MAC:"5a:b9:44:f6:ee:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:36:22.909228 containerd[1521]: 2025-05-13 12:36:22.904 [INFO][4429] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" Namespace="calico-apiserver" 
Pod="calico-apiserver-86c67ddb4-djgp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--86c67ddb4--djgp4-eth0" May 13 12:36:22.911528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048189832.mount: Deactivated successfully. May 13 12:36:22.913966 containerd[1521]: time="2025-05-13T12:36:22.913699740Z" level=info msg="Container 0e4be99d4cd999d7c311fda2e4527dcb58922f551270b2df944284e571f003f4: CDI devices from CRI Config.CDIDevices: []" May 13 12:36:22.923804 containerd[1521]: time="2025-05-13T12:36:22.923758274Z" level=info msg="CreateContainer within sandbox \"b63c7484c36b61197d3fb355fc157fd3331641223be6c5aec93f5d73013e76ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e4be99d4cd999d7c311fda2e4527dcb58922f551270b2df944284e571f003f4\"" May 13 12:36:22.924749 containerd[1521]: time="2025-05-13T12:36:22.924713407Z" level=info msg="StartContainer for \"0e4be99d4cd999d7c311fda2e4527dcb58922f551270b2df944284e571f003f4\"" May 13 12:36:22.926168 containerd[1521]: time="2025-05-13T12:36:22.926070905Z" level=info msg="connecting to shim 0e4be99d4cd999d7c311fda2e4527dcb58922f551270b2df944284e571f003f4" address="unix:///run/containerd/s/d6b65f2093bac74ef3e9b41171a70b048928df9025ecc147fb81e3be5e3803cb" protocol=ttrpc version=3 May 13 12:36:22.951834 systemd[1]: Started cri-containerd-0e4be99d4cd999d7c311fda2e4527dcb58922f551270b2df944284e571f003f4.scope - libcontainer container 0e4be99d4cd999d7c311fda2e4527dcb58922f551270b2df944284e571f003f4. 
May 13 12:36:22.953557 containerd[1521]: time="2025-05-13T12:36:22.953148667Z" level=info msg="connecting to shim 524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03" address="unix:///run/containerd/s/1f7afce77673ec4ab72eb862a615681b184f3882a9a8c4b3f9fa2f0e4ddf4e28" namespace=k8s.io protocol=ttrpc version=3 May 13 12:36:22.985090 systemd[1]: Started cri-containerd-524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03.scope - libcontainer container 524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03. May 13 12:36:23.000246 containerd[1521]: time="2025-05-13T12:36:22.999978813Z" level=info msg="StartContainer for \"0e4be99d4cd999d7c311fda2e4527dcb58922f551270b2df944284e571f003f4\" returns successfully" May 13 12:36:23.027055 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:36:23.053323 containerd[1521]: time="2025-05-13T12:36:23.053176666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86c67ddb4-djgp4,Uid:cf6950d9-0ba5-43dc-9de9-e60cd82a7be3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03\"" May 13 12:36:23.815124 kubelet[2650]: E0513 12:36:23.815090 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:23.815415 kubelet[2650]: E0513 12:36:23.815395 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:23.838423 kubelet[2650]: I0513 12:36:23.837384 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z9526" podStartSLOduration=38.837366367 podStartE2EDuration="38.837366367s" podCreationTimestamp="2025-05-13 12:35:45 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:36:23.827004352 +0000 UTC m=+44.261582404" watchObservedRunningTime="2025-05-13 12:36:23.837366367 +0000 UTC m=+44.271944379" May 13 12:36:23.853089 systemd-networkd[1421]: calib2c57b8a76d: Gained IPv6LL May 13 12:36:24.045191 systemd-networkd[1421]: cali9168f36018b: Gained IPv6LL May 13 12:36:24.816625 kubelet[2650]: E0513 12:36:24.816596 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:25.464725 containerd[1521]: time="2025-05-13T12:36:25.464683713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:25.465221 containerd[1521]: time="2025-05-13T12:36:25.465189839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 12:36:25.465976 containerd[1521]: time="2025-05-13T12:36:25.465941289Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:25.467759 containerd[1521]: time="2025-05-13T12:36:25.467725151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:25.468559 containerd[1521]: time="2025-05-13T12:36:25.468532481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 5.227716712s" May 13 12:36:25.468614 containerd[1521]: time="2025-05-13T12:36:25.468566161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 12:36:25.469925 containerd[1521]: time="2025-05-13T12:36:25.469805657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 12:36:25.471516 containerd[1521]: time="2025-05-13T12:36:25.471473838Z" level=info msg="CreateContainer within sandbox \"80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 12:36:25.482747 containerd[1521]: time="2025-05-13T12:36:25.482082130Z" level=info msg="Container 4117a380845219e952745cb51535b136d2b2edff1982c7eb5c85631a8e63ad0e: CDI devices from CRI Config.CDIDevices: []" May 13 12:36:25.494105 containerd[1521]: time="2025-05-13T12:36:25.494068479Z" level=info msg="CreateContainer within sandbox \"80f59e36edd591a509f7e0c6423a2a9bac0e5805cd82a0b46d5df2a06375d0dc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4117a380845219e952745cb51535b136d2b2edff1982c7eb5c85631a8e63ad0e\"" May 13 12:36:25.495915 containerd[1521]: time="2025-05-13T12:36:25.495350015Z" level=info msg="StartContainer for \"4117a380845219e952745cb51535b136d2b2edff1982c7eb5c85631a8e63ad0e\"" May 13 12:36:25.496210 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:34108.service - OpenSSH per-connection server daemon (10.0.0.1:34108). 
May 13 12:36:25.497685 containerd[1521]: time="2025-05-13T12:36:25.497656323Z" level=info msg="connecting to shim 4117a380845219e952745cb51535b136d2b2edff1982c7eb5c85631a8e63ad0e" address="unix:///run/containerd/s/55a33a33325e403db6a883dea562032cd91b52137efdfcd7beb2c0de0f2e67e6" protocol=ttrpc version=3 May 13 12:36:25.531060 systemd[1]: Started cri-containerd-4117a380845219e952745cb51535b136d2b2edff1982c7eb5c85631a8e63ad0e.scope - libcontainer container 4117a380845219e952745cb51535b136d2b2edff1982c7eb5c85631a8e63ad0e. May 13 12:36:25.582528 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 34108 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:25.585733 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:25.600585 containerd[1521]: time="2025-05-13T12:36:25.600553403Z" level=info msg="StartContainer for \"4117a380845219e952745cb51535b136d2b2edff1982c7eb5c85631a8e63ad0e\" returns successfully" May 13 12:36:25.604410 systemd-logind[1502]: New session 13 of user core. May 13 12:36:25.611048 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 12:36:25.797093 sshd[4675]: Connection closed by 10.0.0.1 port 34108 May 13 12:36:25.797974 sshd-session[4647]: pam_unix(sshd:session): session closed for user core May 13 12:36:25.801921 systemd[1]: session-13.scope: Deactivated successfully. May 13 12:36:25.801931 systemd-logind[1502]: Session 13 logged out. Waiting for processes to exit. May 13 12:36:25.803222 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:34108.service: Deactivated successfully. May 13 12:36:25.805594 systemd-logind[1502]: Removed session 13. 
May 13 12:36:25.822652 kubelet[2650]: E0513 12:36:25.822626 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:25.834539 kubelet[2650]: I0513 12:36:25.834213 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86c67ddb4-6s9k8" podStartSLOduration=29.60447309 podStartE2EDuration="34.834196509s" podCreationTimestamp="2025-05-13 12:35:51 +0000 UTC" firstStartedPulling="2025-05-13 12:36:20.239699273 +0000 UTC m=+40.674277285" lastFinishedPulling="2025-05-13 12:36:25.469422692 +0000 UTC m=+45.904000704" observedRunningTime="2025-05-13 12:36:25.832716371 +0000 UTC m=+46.267294383" watchObservedRunningTime="2025-05-13 12:36:25.834196509 +0000 UTC m=+46.268774521" May 13 12:36:26.824396 kubelet[2650]: I0513 12:36:26.824261 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:36:28.677879 containerd[1521]: time="2025-05-13T12:36:28.677574809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:28.678409 containerd[1521]: time="2025-05-13T12:36:28.678385978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 12:36:28.679017 containerd[1521]: time="2025-05-13T12:36:28.678987385Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:28.683334 containerd[1521]: time="2025-05-13T12:36:28.682201823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 
12:36:28.683334 containerd[1521]: time="2025-05-13T12:36:28.682951631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 3.21284121s" May 13 12:36:28.683334 containerd[1521]: time="2025-05-13T12:36:28.682975632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 12:36:28.684555 containerd[1521]: time="2025-05-13T12:36:28.684494729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 12:36:28.692423 containerd[1521]: time="2025-05-13T12:36:28.692032657Z" level=info msg="CreateContainer within sandbox \"d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 12:36:28.701420 containerd[1521]: time="2025-05-13T12:36:28.701382007Z" level=info msg="Container 6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9: CDI devices from CRI Config.CDIDevices: []" May 13 12:36:28.710117 containerd[1521]: time="2025-05-13T12:36:28.710083348Z" level=info msg="CreateContainer within sandbox \"d7219a49cff6a678bc354b3cc7172d590cc28e93d4509544b51196bbeef66bd3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9\"" May 13 12:36:28.711609 containerd[1521]: time="2025-05-13T12:36:28.711575886Z" level=info msg="StartContainer for \"6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9\"" May 13 12:36:28.712597 containerd[1521]: time="2025-05-13T12:36:28.712569497Z" level=info msg="connecting to 
shim 6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9" address="unix:///run/containerd/s/09eb07d0103dc1d8dae5e8f38449bca137f1929cc1684552ad91d890f459b152" protocol=ttrpc version=3 May 13 12:36:28.734050 systemd[1]: Started cri-containerd-6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9.scope - libcontainer container 6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9. May 13 12:36:28.768767 containerd[1521]: time="2025-05-13T12:36:28.768724633Z" level=info msg="StartContainer for \"6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9\" returns successfully" May 13 12:36:28.845435 kubelet[2650]: I0513 12:36:28.845343 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-87fb474fb-wjmnx" podStartSLOduration=27.423679637 podStartE2EDuration="35.845326768s" podCreationTimestamp="2025-05-13 12:35:53 +0000 UTC" firstStartedPulling="2025-05-13 12:36:20.26214903 +0000 UTC m=+40.696727042" lastFinishedPulling="2025-05-13 12:36:28.683796161 +0000 UTC m=+49.118374173" observedRunningTime="2025-05-13 12:36:28.84468748 +0000 UTC m=+49.279265492" watchObservedRunningTime="2025-05-13 12:36:28.845326768 +0000 UTC m=+49.279904780" May 13 12:36:28.874196 containerd[1521]: time="2025-05-13T12:36:28.874158864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9\" id:\"0d8cd9f08e40046d9acbeba885f17f5de0ede4acdfd93c16be15935c73db24ce\" pid:4754 exited_at:{seconds:1747139788 nanos:873835901}" May 13 12:36:29.140991 kubelet[2650]: I0513 12:36:29.140943 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:36:30.809173 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:34122.service - OpenSSH per-connection server daemon (10.0.0.1:34122). 
May 13 12:36:30.863294 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 34122 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:30.864807 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:30.868646 systemd-logind[1502]: New session 14 of user core. May 13 12:36:30.881055 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 12:36:31.053038 sshd[4769]: Connection closed by 10.0.0.1 port 34122 May 13 12:36:31.053677 sshd-session[4767]: pam_unix(sshd:session): session closed for user core May 13 12:36:31.057302 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:34122.service: Deactivated successfully. May 13 12:36:31.058875 systemd[1]: session-14.scope: Deactivated successfully. May 13 12:36:31.060497 systemd-logind[1502]: Session 14 logged out. Waiting for processes to exit. May 13 12:36:31.061872 systemd-logind[1502]: Removed session 14. May 13 12:36:34.770936 containerd[1521]: time="2025-05-13T12:36:34.770594373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:34.771370 containerd[1521]: time="2025-05-13T12:36:34.771204380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 12:36:34.771965 containerd[1521]: time="2025-05-13T12:36:34.771912147Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:34.773886 containerd[1521]: time="2025-05-13T12:36:34.773838807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:36:34.774446 containerd[1521]: time="2025-05-13T12:36:34.774337453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 6.089807603s" May 13 12:36:34.774446 containerd[1521]: time="2025-05-13T12:36:34.774366293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 12:36:34.775365 containerd[1521]: time="2025-05-13T12:36:34.775340623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 12:36:34.780270 containerd[1521]: time="2025-05-13T12:36:34.780240755Z" level=info msg="CreateContainer within sandbox \"f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 12:36:34.788264 containerd[1521]: time="2025-05-13T12:36:34.787291269Z" level=info msg="Container 03cedde3e42472b48b2a023210741541723acee9083b5d8a37289036ef92ead5: CDI devices from CRI Config.CDIDevices: []" May 13 12:36:34.794811 containerd[1521]: time="2025-05-13T12:36:34.794759668Z" level=info msg="CreateContainer within sandbox \"f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"03cedde3e42472b48b2a023210741541723acee9083b5d8a37289036ef92ead5\"" May 13 12:36:34.795184 containerd[1521]: time="2025-05-13T12:36:34.795159752Z" level=info msg="StartContainer for \"03cedde3e42472b48b2a023210741541723acee9083b5d8a37289036ef92ead5\""
May 13 12:36:34.796544 containerd[1521]: time="2025-05-13T12:36:34.796512326Z" level=info msg="connecting to shim 03cedde3e42472b48b2a023210741541723acee9083b5d8a37289036ef92ead5" address="unix:///run/containerd/s/6da9093c197202b473bc9a3fc33084d8ebf61f337bb1e3d68936fba924a5d51f" protocol=ttrpc version=3 May 13 12:36:34.824034 systemd[1]: Started cri-containerd-03cedde3e42472b48b2a023210741541723acee9083b5d8a37289036ef92ead5.scope - libcontainer container 03cedde3e42472b48b2a023210741541723acee9083b5d8a37289036ef92ead5. May 13 12:36:34.862671 containerd[1521]: time="2025-05-13T12:36:34.862624063Z" level=info msg="StartContainer for \"03cedde3e42472b48b2a023210741541723acee9083b5d8a37289036ef92ead5\" returns successfully" May 13 12:36:35.816394 containerd[1521]: time="2025-05-13T12:36:35.816322027Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:35.817500 containerd[1521]: time="2025-05-13T12:36:35.817464239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 12:36:35.818891 containerd[1521]: time="2025-05-13T12:36:35.818780173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.043406829s" May 13 12:36:35.818891 containerd[1521]: time="2025-05-13T12:36:35.818813053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 12:36:35.819995 containerd[1521]: time="2025-05-13T12:36:35.819667942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 12:36:35.821080 containerd[1521]: time="2025-05-13T12:36:35.821005516Z" level=info msg="CreateContainer within sandbox \"524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 13 12:36:35.830684 containerd[1521]: time="2025-05-13T12:36:35.830635256Z" level=info msg="Container 5c782fa2f472e74a57dad56d9bdf91609044e3e710f0e28beac2d15cb6c3ed90: CDI devices from CRI Config.CDIDevices: []" May 13 12:36:35.841174 containerd[1521]: time="2025-05-13T12:36:35.841142845Z" level=info msg="CreateContainer within sandbox \"524edc49a1b227f2d871d64653e936da384ae1f2d4dd8055e3f9c131df8aed03\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5c782fa2f472e74a57dad56d9bdf91609044e3e710f0e28beac2d15cb6c3ed90\"" May 13 12:36:35.841726 containerd[1521]: time="2025-05-13T12:36:35.841498569Z" level=info msg="StartContainer for \"5c782fa2f472e74a57dad56d9bdf91609044e3e710f0e28beac2d15cb6c3ed90\"" May 13 12:36:35.842687 containerd[1521]: time="2025-05-13T12:36:35.842660301Z" level=info msg="connecting to shim 5c782fa2f472e74a57dad56d9bdf91609044e3e710f0e28beac2d15cb6c3ed90" address="unix:///run/containerd/s/1f7afce77673ec4ab72eb862a615681b184f3882a9a8c4b3f9fa2f0e4ddf4e28" protocol=ttrpc version=3 May 13 12:36:35.867047 systemd[1]: Started cri-containerd-5c782fa2f472e74a57dad56d9bdf91609044e3e710f0e28beac2d15cb6c3ed90.scope - libcontainer container 5c782fa2f472e74a57dad56d9bdf91609044e3e710f0e28beac2d15cb6c3ed90. May 13 12:36:35.903508 containerd[1521]: time="2025-05-13T12:36:35.903416132Z" level=info msg="StartContainer for \"5c782fa2f472e74a57dad56d9bdf91609044e3e710f0e28beac2d15cb6c3ed90\" returns successfully" May 13 12:36:36.064596 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:33304.service - OpenSSH per-connection server daemon (10.0.0.1:33304). May 13 12:36:36.128170 sshd[4860]: Accepted publickey for core from 10.0.0.1 port 33304 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:36.129431 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:36.133444 systemd-logind[1502]: New session 15 of user core.
May 13 12:36:36.143064 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 12:36:36.300304 sshd[4862]: Connection closed by 10.0.0.1 port 33304 May 13 12:36:36.300208 sshd-session[4860]: pam_unix(sshd:session): session closed for user core May 13 12:36:36.304817 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:33304.service: Deactivated successfully. May 13 12:36:36.306728 systemd[1]: session-15.scope: Deactivated successfully. May 13 12:36:36.308040 systemd-logind[1502]: Session 15 logged out. Waiting for processes to exit. May 13 12:36:36.309844 systemd-logind[1502]: Removed session 15. May 13 12:36:36.866857 kubelet[2650]: I0513 12:36:36.866791 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86c67ddb4-djgp4" podStartSLOduration=33.103059333 podStartE2EDuration="45.866777173s" podCreationTimestamp="2025-05-13 12:35:51 +0000 UTC" firstStartedPulling="2025-05-13 12:36:23.055818261 +0000 UTC m=+43.490396273" lastFinishedPulling="2025-05-13 12:36:35.819536101 +0000 UTC m=+56.254114113" observedRunningTime="2025-05-13 12:36:36.865892404 +0000 UTC m=+57.300470376" watchObservedRunningTime="2025-05-13 12:36:36.866777173 +0000 UTC m=+57.301355185" May 13 12:36:41.320455 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:33314.service - OpenSSH per-connection server daemon (10.0.0.1:33314). May 13 12:36:41.376944 sshd[4885]: Accepted publickey for core from 10.0.0.1 port 33314 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:41.378188 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:41.382114 systemd-logind[1502]: New session 16 of user core. May 13 12:36:41.391536 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 13 12:36:41.563003 sshd[4887]: Connection closed by 10.0.0.1 port 33314 May 13 12:36:41.563328 sshd-session[4885]: pam_unix(sshd:session): session closed for user core May 13 12:36:41.566741 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:33314.service: Deactivated successfully. May 13 12:36:41.568614 systemd[1]: session-16.scope: Deactivated successfully. May 13 12:36:41.569484 systemd-logind[1502]: Session 16 logged out. Waiting for processes to exit. May 13 12:36:41.571300 systemd-logind[1502]: Removed session 16. May 13 12:36:42.814445 containerd[1521]: time="2025-05-13T12:36:42.814030263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:42.814841 containerd[1521]: time="2025-05-13T12:36:42.814498548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 12:36:42.815306 containerd[1521]: time="2025-05-13T12:36:42.815282315Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:36:42.817539 containerd[1521]: time="2025-05-13T12:36:42.817506897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:36:42.818231 containerd[1521]: time="2025-05-13T12:36:42.818199383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 6.998498201s" May 13 12:36:42.818285 containerd[1521]: time="2025-05-13T12:36:42.818231384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 12:36:42.821313 containerd[1521]: time="2025-05-13T12:36:42.821097051Z" level=info msg="CreateContainer within sandbox \"f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 12:36:42.829415 containerd[1521]: time="2025-05-13T12:36:42.829388090Z" level=info msg="Container 0cab744f6a4953db102ccd1689b10f57fcc5fc4ba765b6f9b3499131aa7bbad9: CDI devices from CRI Config.CDIDevices: []" May 13 12:36:42.834304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873658298.mount: Deactivated successfully. May 13 12:36:42.837602 containerd[1521]: time="2025-05-13T12:36:42.837488088Z" level=info msg="CreateContainer within sandbox \"f18c530d23537357d9db4cd72139409ed3862ed26735dc11783f9f6349cc7b9a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0cab744f6a4953db102ccd1689b10f57fcc5fc4ba765b6f9b3499131aa7bbad9\"" May 13 12:36:42.838090 containerd[1521]: time="2025-05-13T12:36:42.838065893Z" level=info msg="StartContainer for \"0cab744f6a4953db102ccd1689b10f57fcc5fc4ba765b6f9b3499131aa7bbad9\"" May 13 12:36:42.839549 containerd[1521]: time="2025-05-13T12:36:42.839426866Z" level=info msg="connecting to shim 0cab744f6a4953db102ccd1689b10f57fcc5fc4ba765b6f9b3499131aa7bbad9" address="unix:///run/containerd/s/6da9093c197202b473bc9a3fc33084d8ebf61f337bb1e3d68936fba924a5d51f" protocol=ttrpc version=3 May 13 12:36:42.867050 systemd[1]: Started cri-containerd-0cab744f6a4953db102ccd1689b10f57fcc5fc4ba765b6f9b3499131aa7bbad9.scope - libcontainer container 0cab744f6a4953db102ccd1689b10f57fcc5fc4ba765b6f9b3499131aa7bbad9.
May 13 12:36:42.902762 containerd[1521]: time="2025-05-13T12:36:42.902655151Z" level=info msg="StartContainer for \"0cab744f6a4953db102ccd1689b10f57fcc5fc4ba765b6f9b3499131aa7bbad9\" returns successfully" May 13 12:36:43.736895 kubelet[2650]: I0513 12:36:43.736784 2650 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 12:36:43.736895 kubelet[2650]: I0513 12:36:43.736843 2650 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 12:36:43.887473 kubelet[2650]: I0513 12:36:43.887316 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7kdvf" podStartSLOduration=29.052273769 podStartE2EDuration="50.887296125s" podCreationTimestamp="2025-05-13 12:35:53 +0000 UTC" firstStartedPulling="2025-05-13 12:36:20.983819353 +0000 UTC m=+41.418397325" lastFinishedPulling="2025-05-13 12:36:42.818841669 +0000 UTC m=+63.253419681" observedRunningTime="2025-05-13 12:36:43.885291866 +0000 UTC m=+64.319869878" watchObservedRunningTime="2025-05-13 12:36:43.887296125 +0000 UTC m=+64.321874177" May 13 12:36:46.146740 containerd[1521]: time="2025-05-13T12:36:46.146679160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d\" id:\"f3c963ba162704c416ef4b2cf2876a7712432f3e05e156438dfa76f190280fde\" pid:4951 exited_at:{seconds:1747139806 nanos:146375677}" May 13 12:36:46.148871 kubelet[2650]: E0513 12:36:46.148705 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:46.578154 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:59440.service - OpenSSH per-connection server daemon (10.0.0.1:59440). 
May 13 12:36:46.628953 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 59440 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:46.630353 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:46.634275 systemd-logind[1502]: New session 17 of user core. May 13 12:36:46.644032 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 12:36:46.773147 sshd[4966]: Connection closed by 10.0.0.1 port 59440 May 13 12:36:46.774039 sshd-session[4964]: pam_unix(sshd:session): session closed for user core May 13 12:36:46.777512 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:59440.service: Deactivated successfully. May 13 12:36:46.779881 systemd[1]: session-17.scope: Deactivated successfully. May 13 12:36:46.782160 systemd-logind[1502]: Session 17 logged out. Waiting for processes to exit. May 13 12:36:46.783377 systemd-logind[1502]: Removed session 17. May 13 12:36:51.785158 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:59444.service - OpenSSH per-connection server daemon (10.0.0.1:59444). May 13 12:36:51.847379 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 59444 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:51.849141 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:51.854028 systemd-logind[1502]: New session 18 of user core. May 13 12:36:51.865044 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 12:36:52.054940 sshd[4984]: Connection closed by 10.0.0.1 port 59444 May 13 12:36:52.056978 sshd-session[4982]: pam_unix(sshd:session): session closed for user core May 13 12:36:52.068197 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:59444.service: Deactivated successfully. May 13 12:36:52.071351 systemd[1]: session-18.scope: Deactivated successfully. May 13 12:36:52.072168 systemd-logind[1502]: Session 18 logged out. Waiting for processes to exit. 
May 13 12:36:52.074430 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:59452.service - OpenSSH per-connection server daemon (10.0.0.1:59452). May 13 12:36:52.077360 systemd-logind[1502]: Removed session 18. May 13 12:36:52.132802 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 59452 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:52.134032 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:52.137936 systemd-logind[1502]: New session 19 of user core. May 13 12:36:52.147048 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 12:36:52.402527 sshd[5000]: Connection closed by 10.0.0.1 port 59452 May 13 12:36:52.402836 sshd-session[4998]: pam_unix(sshd:session): session closed for user core May 13 12:36:52.414990 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:59452.service: Deactivated successfully. May 13 12:36:52.416889 systemd[1]: session-19.scope: Deactivated successfully. May 13 12:36:52.418469 systemd-logind[1502]: Session 19 logged out. Waiting for processes to exit. May 13 12:36:52.420881 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:59458.service - OpenSSH per-connection server daemon (10.0.0.1:59458). May 13 12:36:52.422905 systemd-logind[1502]: Removed session 19. May 13 12:36:52.479414 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 59458 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:52.480604 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:52.484270 systemd-logind[1502]: New session 20 of user core. May 13 12:36:52.495047 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 13 12:36:53.240347 sshd[5013]: Connection closed by 10.0.0.1 port 59458 May 13 12:36:53.240681 sshd-session[5011]: pam_unix(sshd:session): session closed for user core May 13 12:36:53.255042 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:59458.service: Deactivated successfully. May 13 12:36:53.256602 systemd[1]: session-20.scope: Deactivated successfully. May 13 12:36:53.258947 systemd-logind[1502]: Session 20 logged out. Waiting for processes to exit. May 13 12:36:53.266035 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:42090.service - OpenSSH per-connection server daemon (10.0.0.1:42090). May 13 12:36:53.266759 systemd-logind[1502]: Removed session 20. May 13 12:36:53.330741 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 42090 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:53.331875 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:53.353827 systemd-logind[1502]: New session 21 of user core. May 13 12:36:53.364139 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 12:36:53.674430 sshd[5035]: Connection closed by 10.0.0.1 port 42090 May 13 12:36:53.674777 sshd-session[5033]: pam_unix(sshd:session): session closed for user core May 13 12:36:53.686707 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:42090.service: Deactivated successfully. May 13 12:36:53.690256 systemd[1]: session-21.scope: Deactivated successfully. May 13 12:36:53.692547 systemd-logind[1502]: Session 21 logged out. Waiting for processes to exit. May 13 12:36:53.696695 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:42102.service - OpenSSH per-connection server daemon (10.0.0.1:42102). May 13 12:36:53.698399 systemd-logind[1502]: Removed session 21. 
May 13 12:36:53.751573 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 42102 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:53.752840 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:53.757415 systemd-logind[1502]: New session 22 of user core. May 13 12:36:53.768082 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 12:36:53.919433 sshd[5049]: Connection closed by 10.0.0.1 port 42102 May 13 12:36:53.919961 sshd-session[5047]: pam_unix(sshd:session): session closed for user core May 13 12:36:53.923415 systemd-logind[1502]: Session 22 logged out. Waiting for processes to exit. May 13 12:36:53.923761 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:42102.service: Deactivated successfully. May 13 12:36:53.926379 systemd[1]: session-22.scope: Deactivated successfully. May 13 12:36:53.928884 systemd-logind[1502]: Removed session 22. May 13 12:36:54.661369 kubelet[2650]: E0513 12:36:54.661286 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:56.661215 kubelet[2650]: E0513 12:36:56.661186 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:36:58.863672 containerd[1521]: time="2025-05-13T12:36:58.863626153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9\" id:\"659ae16e2814b7ff42f8d1f649b272143e7805da0929c304c312d48553736625\" pid:5079 exited_at:{seconds:1747139818 nanos:863425166}" May 13 12:36:58.935105 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:42106.service - OpenSSH per-connection server daemon (10.0.0.1:42106). 
May 13 12:36:58.990493 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 42106 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:36:58.991598 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:36:58.995443 systemd-logind[1502]: New session 23 of user core. May 13 12:36:59.002051 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 12:36:59.117996 sshd[5092]: Connection closed by 10.0.0.1 port 42106 May 13 12:36:59.118239 sshd-session[5090]: pam_unix(sshd:session): session closed for user core May 13 12:36:59.121632 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:42106.service: Deactivated successfully. May 13 12:36:59.123638 systemd[1]: session-23.scope: Deactivated successfully. May 13 12:36:59.125594 systemd-logind[1502]: Session 23 logged out. Waiting for processes to exit. May 13 12:36:59.127073 systemd-logind[1502]: Removed session 23. May 13 12:37:01.676402 containerd[1521]: time="2025-05-13T12:37:01.676360555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6083c33837da27653e65ae37cad674de996ec26fef3155c6799b65e8676c95a9\" id:\"5429b728984d111a2b60f323efa04e18d580a3980a45ed651245b5c8652f6a50\" pid:5116 exited_at:{seconds:1747139821 nanos:673930652}" May 13 12:37:04.141680 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:50708.service - OpenSSH per-connection server daemon (10.0.0.1:50708). May 13 12:37:04.189001 sshd[5127]: Accepted publickey for core from 10.0.0.1 port 50708 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:37:04.190286 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:37:04.195365 systemd-logind[1502]: New session 24 of user core. May 13 12:37:04.207052 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 13 12:37:04.340410 sshd[5129]: Connection closed by 10.0.0.1 port 50708 May 13 12:37:04.340744 sshd-session[5127]: pam_unix(sshd:session): session closed for user core May 13 12:37:04.344885 systemd-logind[1502]: Session 24 logged out. Waiting for processes to exit. May 13 12:37:04.345087 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:50708.service: Deactivated successfully. May 13 12:37:04.347412 systemd[1]: session-24.scope: Deactivated successfully. May 13 12:37:04.348726 systemd-logind[1502]: Removed session 24. May 13 12:37:06.660716 kubelet[2650]: E0513 12:37:06.660680 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:37:09.355149 systemd[1]: Started sshd@24-10.0.0.39:22-10.0.0.1:50724.service - OpenSSH per-connection server daemon (10.0.0.1:50724). May 13 12:37:09.417438 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 50724 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:37:09.418686 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:37:09.422478 systemd-logind[1502]: New session 25 of user core. May 13 12:37:09.434098 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 12:37:09.565618 sshd[5146]: Connection closed by 10.0.0.1 port 50724 May 13 12:37:09.565963 sshd-session[5144]: pam_unix(sshd:session): session closed for user core May 13 12:37:09.569929 systemd[1]: sshd@24-10.0.0.39:22-10.0.0.1:50724.service: Deactivated successfully. May 13 12:37:09.571815 systemd[1]: session-25.scope: Deactivated successfully. May 13 12:37:09.572619 systemd-logind[1502]: Session 25 logged out. Waiting for processes to exit. May 13 12:37:09.573977 systemd-logind[1502]: Removed session 25. May 13 12:37:14.582231 systemd[1]: Started sshd@25-10.0.0.39:22-10.0.0.1:38986.service - OpenSSH per-connection server daemon (10.0.0.1:38986). 
May 13 12:37:14.646627 sshd[5159]: Accepted publickey for core from 10.0.0.1 port 38986 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:37:14.652430 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:37:14.659469 systemd-logind[1502]: New session 26 of user core. May 13 12:37:14.670119 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 12:37:14.802497 sshd[5161]: Connection closed by 10.0.0.1 port 38986 May 13 12:37:14.802843 sshd-session[5159]: pam_unix(sshd:session): session closed for user core May 13 12:37:14.805752 systemd[1]: sshd@25-10.0.0.39:22-10.0.0.1:38986.service: Deactivated successfully. May 13 12:37:14.807459 systemd[1]: session-26.scope: Deactivated successfully. May 13 12:37:14.808216 systemd-logind[1502]: Session 26 logged out. Waiting for processes to exit. May 13 12:37:14.809993 systemd-logind[1502]: Removed session 26. May 13 12:37:16.145290 containerd[1521]: time="2025-05-13T12:37:16.145236084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85923af824a4c40b165c5d70019b24c3923f38d0f266fc89000c8034aec0720d\" id:\"5cd7542c2424cbc78322770c9d1277667aff5ee7617b50845eeece5df48e4263\" pid:5186 exited_at:{seconds:1747139836 nanos:144957339}" May 13 12:37:19.661002 kubelet[2650]: E0513 12:37:19.660881 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:37:19.815184 systemd[1]: Started sshd@26-10.0.0.39:22-10.0.0.1:38990.service - OpenSSH per-connection server daemon (10.0.0.1:38990). 
May 13 12:37:19.872932 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 38990 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:37:19.873675 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:37:19.877315 systemd-logind[1502]: New session 27 of user core. May 13 12:37:19.891049 systemd[1]: Started session-27.scope - Session 27 of User core. May 13 12:37:20.014873 sshd[5201]: Connection closed by 10.0.0.1 port 38990 May 13 12:37:20.015332 sshd-session[5199]: pam_unix(sshd:session): session closed for user core May 13 12:37:20.018632 systemd[1]: sshd@26-10.0.0.39:22-10.0.0.1:38990.service: Deactivated successfully. May 13 12:37:20.020536 systemd[1]: session-27.scope: Deactivated successfully. May 13 12:37:20.022443 systemd-logind[1502]: Session 27 logged out. Waiting for processes to exit. May 13 12:37:20.023426 systemd-logind[1502]: Removed session 27.