May 13 23:47:57.981079 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 23:47:57.981104 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025 May 13 23:47:57.981114 kernel: KASLR enabled May 13 23:47:57.981120 kernel: efi: EFI v2.7 by EDK II May 13 23:47:57.981126 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb4ff018 ACPI 2.0=0xd93ef018 RNG=0xd93efa18 MEMRESERVE=0xd91e1f18 May 13 23:47:57.981132 kernel: random: crng init done May 13 23:47:57.981139 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 May 13 23:47:57.981146 kernel: secureboot: Secure boot enabled May 13 23:47:57.981151 kernel: ACPI: Early table checksum verification disabled May 13 23:47:57.981158 kernel: ACPI: RSDP 0x00000000D93EF018 000024 (v02 BOCHS ) May 13 23:47:57.981166 kernel: ACPI: XSDT 0x00000000D93EFF18 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 23:47:57.981172 kernel: ACPI: FACP 0x00000000D93EFB18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981178 kernel: ACPI: DSDT 0x00000000D93ED018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981184 kernel: ACPI: APIC 0x00000000D93EFC98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981191 kernel: ACPI: PPTT 0x00000000D93EF098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981200 kernel: ACPI: GTDT 0x00000000D93EF818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981206 kernel: ACPI: MCFG 0x00000000D93EFA98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981213 kernel: ACPI: SPCR 0x00000000D93EF918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981219 kernel: ACPI: DBG2 0x00000000D93EF998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981226 kernel: ACPI: IORT 0x00000000D93EF198 000080 (v03 
BOCHS BXPC 00000001 BXPC 00000001) May 13 23:47:57.981232 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 23:47:57.981238 kernel: NUMA: Failed to initialise from firmware May 13 23:47:57.981244 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:47:57.981250 kernel: NUMA: NODE_DATA [mem 0xdc729800-0xdc72efff] May 13 23:47:57.981256 kernel: Zone ranges: May 13 23:47:57.981263 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:47:57.981269 kernel: DMA32 empty May 13 23:47:57.981275 kernel: Normal empty May 13 23:47:57.981281 kernel: Movable zone start for each node May 13 23:47:57.981287 kernel: Early memory node ranges May 13 23:47:57.981293 kernel: node 0: [mem 0x0000000040000000-0x00000000d93effff] May 13 23:47:57.981300 kernel: node 0: [mem 0x00000000d93f0000-0x00000000d972ffff] May 13 23:47:57.981306 kernel: node 0: [mem 0x00000000d9730000-0x00000000dcbfffff] May 13 23:47:57.981312 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] May 13 23:47:57.981318 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 13 23:47:57.981324 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:47:57.981331 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 23:47:57.981338 kernel: psci: probing for conduit method from ACPI. May 13 23:47:57.981360 kernel: psci: PSCIv1.1 detected in firmware. 
May 13 23:47:57.981366 kernel: psci: Using standard PSCI v0.2 function IDs May 13 23:47:57.981375 kernel: psci: Trusted OS migration not required May 13 23:47:57.981383 kernel: psci: SMC Calling Convention v1.1 May 13 23:47:57.981389 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 23:47:57.981396 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 23:47:57.981404 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 23:47:57.981411 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 23:47:57.981419 kernel: Detected PIPT I-cache on CPU0 May 13 23:47:57.981426 kernel: CPU features: detected: GIC system register CPU interface May 13 23:47:57.981433 kernel: CPU features: detected: Hardware dirty bit management May 13 23:47:57.981440 kernel: CPU features: detected: Spectre-v4 May 13 23:47:57.981447 kernel: CPU features: detected: Spectre-BHB May 13 23:47:57.981454 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 23:47:57.981461 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 23:47:57.981486 kernel: CPU features: detected: ARM erratum 1418040 May 13 23:47:57.981496 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 23:47:57.981503 kernel: alternatives: applying boot alternatives May 13 23:47:57.981510 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:47:57.981518 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 13 23:47:57.981525 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:47:57.981531 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:47:57.981538 kernel: Fallback order for Node 0: 0 May 13 23:47:57.981545 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 23:47:57.981560 kernel: Policy zone: DMA May 13 23:47:57.981567 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:47:57.981576 kernel: software IO TLB: area num 4. May 13 23:47:57.981582 kernel: software IO TLB: mapped [mem 0x00000000d2800000-0x00000000d6800000] (64MB) May 13 23:47:57.981589 kernel: Memory: 2385752K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 186536K reserved, 0K cma-reserved) May 13 23:47:57.981596 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 23:47:57.981602 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:47:57.981609 kernel: rcu: RCU event tracing is enabled. May 13 23:47:57.981616 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 23:47:57.981623 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:47:57.981629 kernel: Tracing variant of Tasks RCU enabled. May 13 23:47:57.981636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 23:47:57.981642 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 23:47:57.981649 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 23:47:57.981657 kernel: GICv3: 256 SPIs implemented May 13 23:47:57.981664 kernel: GICv3: 0 Extended SPIs implemented May 13 23:47:57.981670 kernel: Root IRQ handler: gic_handle_irq May 13 23:47:57.981676 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 23:47:57.981683 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 23:47:57.981689 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 23:47:57.981696 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 13 23:47:57.981702 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 13 23:47:57.981709 kernel: GICv3: using LPI property table @0x00000000400f0000 May 13 23:47:57.981716 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 13 23:47:57.981729 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:47:57.981738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:47:57.981745 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 23:47:57.981751 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 23:47:57.981758 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 23:47:57.981786 kernel: arm-pv: using stolen time PV May 13 23:47:57.981820 kernel: Console: colour dummy device 80x25 May 13 23:47:57.981827 kernel: ACPI: Core revision 20230628 May 13 23:47:57.981834 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) May 13 23:47:57.981841 kernel: pid_max: default: 32768 minimum: 301 May 13 23:47:57.981848 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:47:57.981856 kernel: landlock: Up and running. May 13 23:47:57.981863 kernel: SELinux: Initializing. May 13 23:47:57.981870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:47:57.981877 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:47:57.981884 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 23:47:57.981891 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:47:57.981898 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:47:57.981904 kernel: rcu: Hierarchical SRCU implementation. May 13 23:47:57.981911 kernel: rcu: Max phase no-delay instances is 400. May 13 23:47:57.981919 kernel: Platform MSI: ITS@0x8080000 domain created May 13 23:47:57.981925 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 23:47:57.981932 kernel: Remapping and enabling EFI services. May 13 23:47:57.981938 kernel: smp: Bringing up secondary CPUs ... 
May 13 23:47:57.981945 kernel: Detected PIPT I-cache on CPU1 May 13 23:47:57.981951 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 23:47:57.981958 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 13 23:47:57.981965 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:47:57.981972 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 23:47:57.981979 kernel: Detected PIPT I-cache on CPU2 May 13 23:47:57.981988 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 23:47:57.981995 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 13 23:47:57.982007 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:47:57.982016 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 23:47:57.982023 kernel: Detected PIPT I-cache on CPU3 May 13 23:47:57.982031 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 23:47:57.982038 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 13 23:47:57.982045 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:47:57.982052 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 23:47:57.982059 kernel: smp: Brought up 1 node, 4 CPUs May 13 23:47:57.982067 kernel: SMP: Total of 4 processors activated. 
May 13 23:47:57.982076 kernel: CPU features: detected: 32-bit EL0 Support May 13 23:47:57.982083 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 23:47:57.982091 kernel: CPU features: detected: Common not Private translations May 13 23:47:57.982098 kernel: CPU features: detected: CRC32 instructions May 13 23:47:57.982105 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 23:47:57.982113 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 23:47:57.982122 kernel: CPU features: detected: LSE atomic instructions May 13 23:47:57.982134 kernel: CPU features: detected: Privileged Access Never May 13 23:47:57.982141 kernel: CPU features: detected: RAS Extension Support May 13 23:47:57.982149 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 23:47:57.982156 kernel: CPU: All CPU(s) started at EL1 May 13 23:47:57.982163 kernel: alternatives: applying system-wide alternatives May 13 23:47:57.982171 kernel: devtmpfs: initialized May 13 23:47:57.982178 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:47:57.982185 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 23:47:57.982196 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:47:57.982203 kernel: SMBIOS 3.0.0 present. 
May 13 23:47:57.982210 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 13 23:47:57.982220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:47:57.982231 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 23:47:57.982238 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 23:47:57.982245 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 23:47:57.982252 kernel: audit: initializing netlink subsys (disabled) May 13 23:47:57.982260 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1 May 13 23:47:57.982269 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:47:57.982276 kernel: cpuidle: using governor menu May 13 23:47:57.982285 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 23:47:57.982292 kernel: ASID allocator initialised with 32768 entries May 13 23:47:57.982299 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:47:57.982307 kernel: Serial: AMBA PL011 UART driver May 13 23:47:57.982315 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 23:47:57.982322 kernel: Modules: 0 pages in range for non-PLT usage May 13 23:47:57.982332 kernel: Modules: 509232 pages in range for PLT usage May 13 23:47:57.982345 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:47:57.982354 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:47:57.982361 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 23:47:57.982368 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 23:47:57.982375 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:47:57.982382 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:47:57.982389 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 
pages May 13 23:47:57.982396 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 23:47:57.982403 kernel: ACPI: Added _OSI(Module Device) May 13 23:47:57.982411 kernel: ACPI: Added _OSI(Processor Device) May 13 23:47:57.982418 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:47:57.982435 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:47:57.982442 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:47:57.982449 kernel: ACPI: Interpreter enabled May 13 23:47:57.982456 kernel: ACPI: Using GIC for interrupt routing May 13 23:47:57.982463 kernel: ACPI: MCFG table detected, 1 entries May 13 23:47:57.982470 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 23:47:57.982480 kernel: printk: console [ttyAMA0] enabled May 13 23:47:57.982489 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:47:57.982682 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:47:57.982762 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 23:47:57.982830 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 23:47:57.982899 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 23:47:57.982973 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 23:47:57.982985 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 23:47:57.982996 kernel: PCI host bridge to bus 0000:00 May 13 23:47:57.983070 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 23:47:57.983133 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 13 23:47:57.983194 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 23:47:57.983260 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:47:57.983368 
kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 23:47:57.983450 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 23:47:57.983534 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 23:47:57.983618 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 23:47:57.983687 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:47:57.983756 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:47:57.983824 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 23:47:57.983893 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 23:47:57.983955 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 23:47:57.984021 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 23:47:57.984083 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 23:47:57.984092 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 23:47:57.984099 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 23:47:57.984106 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 23:47:57.984114 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 23:47:57.984121 kernel: iommu: Default domain type: Translated May 13 23:47:57.984128 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 23:47:57.984138 kernel: efivars: Registered efivars operations May 13 23:47:57.984145 kernel: vgaarb: loaded May 13 23:47:57.984152 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 23:47:57.984160 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:47:57.984167 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:47:57.984174 kernel: pnp: PnP ACPI init May 13 23:47:57.984254 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 23:47:57.984265 
kernel: pnp: PnP ACPI: found 1 devices May 13 23:47:57.984274 kernel: NET: Registered PF_INET protocol family May 13 23:47:57.984282 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:47:57.984289 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 23:47:57.984298 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:47:57.984307 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:47:57.984315 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 23:47:57.984324 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 23:47:57.984332 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:47:57.984341 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:47:57.984351 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:47:57.984359 kernel: PCI: CLS 0 bytes, default 64 May 13 23:47:57.984366 kernel: kvm [1]: HYP mode not available May 13 23:47:57.984373 kernel: Initialise system trusted keyrings May 13 23:47:57.984380 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 23:47:57.984387 kernel: Key type asymmetric registered May 13 23:47:57.984394 kernel: Asymmetric key parser 'x509' registered May 13 23:47:57.984401 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 23:47:57.984410 kernel: io scheduler mq-deadline registered May 13 23:47:57.984420 kernel: io scheduler kyber registered May 13 23:47:57.984427 kernel: io scheduler bfq registered May 13 23:47:57.984434 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 23:47:57.984442 kernel: ACPI: button: Power Button [PWRB] May 13 23:47:57.984449 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 23:47:57.984529 kernel: virtio-pci 0000:00:01.0: 
enabling device (0005 -> 0007) May 13 23:47:57.984539 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:47:57.984546 kernel: thunder_xcv, ver 1.0 May 13 23:47:57.984562 kernel: thunder_bgx, ver 1.0 May 13 23:47:57.984572 kernel: nicpf, ver 1.0 May 13 23:47:57.984580 kernel: nicvf, ver 1.0 May 13 23:47:57.984661 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 23:47:57.984729 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:47:57 UTC (1747180077) May 13 23:47:57.984739 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:47:57.984746 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 23:47:57.984754 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 23:47:57.984761 kernel: watchdog: Hard watchdog permanently disabled May 13 23:47:57.984771 kernel: NET: Registered PF_INET6 protocol family May 13 23:47:57.984778 kernel: Segment Routing with IPv6 May 13 23:47:57.984785 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:47:57.984792 kernel: NET: Registered PF_PACKET protocol family May 13 23:47:57.984800 kernel: Key type dns_resolver registered May 13 23:47:57.984807 kernel: registered taskstats version 1 May 13 23:47:57.984814 kernel: Loading compiled-in X.509 certificates May 13 23:47:57.984821 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd' May 13 23:47:57.984828 kernel: Key type .fscrypt registered May 13 23:47:57.984837 kernel: Key type fscrypt-provisioning registered May 13 23:47:57.984844 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 23:47:57.984851 kernel: ima: Allocated hash algorithm: sha1 May 13 23:47:57.984858 kernel: ima: No architecture policies found May 13 23:47:57.984883 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 23:47:57.984891 kernel: clk: Disabling unused clocks May 13 23:47:57.984898 kernel: Freeing unused kernel memory: 38464K May 13 23:47:57.984906 kernel: Run /init as init process May 13 23:47:57.984913 kernel: with arguments: May 13 23:47:57.984922 kernel: /init May 13 23:47:57.984929 kernel: with environment: May 13 23:47:57.984936 kernel: HOME=/ May 13 23:47:57.984944 kernel: TERM=linux May 13 23:47:57.984950 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:47:57.984959 systemd[1]: Successfully made /usr/ read-only. May 13 23:47:57.984969 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:47:57.984977 systemd[1]: Detected virtualization kvm. May 13 23:47:57.984986 systemd[1]: Detected architecture arm64. May 13 23:47:57.984993 systemd[1]: Running in initrd. May 13 23:47:57.985001 systemd[1]: No hostname configured, using default hostname. May 13 23:47:57.985009 systemd[1]: Hostname set to . May 13 23:47:57.985017 systemd[1]: Initializing machine ID from VM UUID. May 13 23:47:57.985025 systemd[1]: Queued start job for default target initrd.target. May 13 23:47:57.985033 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:47:57.985040 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:47:57.985050 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 13 23:47:57.985059 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:47:57.985067 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:47:57.985075 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:47:57.985084 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:47:57.985092 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:47:57.985100 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:47:57.985109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:47:57.985118 systemd[1]: Reached target paths.target - Path Units. May 13 23:47:57.985125 systemd[1]: Reached target slices.target - Slice Units. May 13 23:47:57.985133 systemd[1]: Reached target swap.target - Swaps. May 13 23:47:57.985141 systemd[1]: Reached target timers.target - Timer Units. May 13 23:47:57.985148 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:47:57.985156 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:47:57.985164 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:47:57.985174 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:47:57.985182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:47:57.985189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:47:57.985197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:47:57.985205 systemd[1]: Reached target sockets.target - Socket Units. 
May 13 23:47:57.985212 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:47:57.985221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:47:57.985228 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:47:57.985236 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:47:57.985245 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:47:57.985253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:47:57.985261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:47:57.985269 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:47:57.985277 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:47:57.985285 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:47:57.985295 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:47:57.985303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:47:57.985330 systemd-journald[235]: Collecting audit messages is disabled. May 13 23:47:57.985355 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:47:57.985366 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:47:57.985377 systemd-journald[235]: Journal started May 13 23:47:57.985397 systemd-journald[235]: Runtime Journal (/run/log/journal/ebb2b7edf4454c2ba3b37d21f94109e3) is 5.9M, max 47.3M, 41.4M free. May 13 23:47:57.972119 systemd-modules-load[238]: Inserted module 'overlay' May 13 23:47:57.987999 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 13 23:47:57.989122 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:47:57.990630 kernel: Bridge firewalling registered May 13 23:47:57.991058 systemd-modules-load[238]: Inserted module 'br_netfilter' May 13 23:47:58.001994 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:47:58.005643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:47:58.007078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:47:58.009782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:47:58.023241 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:47:58.024667 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:47:58.026761 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:47:58.029444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:47:58.032900 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:47:58.034937 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:47:58.050449 dracut-cmdline[277]: dracut-dracut-053 May 13 23:47:58.053406 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:47:58.083532 systemd-resolved[278]: Positive Trust Anchors: May 13 23:47:58.083548 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:47:58.083661 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:47:58.091682 systemd-resolved[278]: Defaulting to hostname 'linux'. May 13 23:47:58.096289 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:47:58.097411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:47:58.144580 kernel: SCSI subsystem initialized May 13 23:47:58.149568 kernel: Loading iSCSI transport class v2.0-870. May 13 23:47:58.159569 kernel: iscsi: registered transport (tcp) May 13 23:47:58.173610 kernel: iscsi: registered transport (qla4xxx) May 13 23:47:58.173669 kernel: QLogic iSCSI HBA Driver May 13 23:47:58.217159 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:47:58.219353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:47:58.248928 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 13 23:47:58.248993 kernel: device-mapper: uevent: version 1.0.3 May 13 23:47:58.249861 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:47:58.296584 kernel: raid6: neonx8 gen() 15766 MB/s May 13 23:47:58.313577 kernel: raid6: neonx4 gen() 15804 MB/s May 13 23:47:58.330567 kernel: raid6: neonx2 gen() 13208 MB/s May 13 23:47:58.347568 kernel: raid6: neonx1 gen() 10479 MB/s May 13 23:47:58.364568 kernel: raid6: int64x8 gen() 6786 MB/s May 13 23:47:58.381568 kernel: raid6: int64x4 gen() 7349 MB/s May 13 23:47:58.398568 kernel: raid6: int64x2 gen() 6109 MB/s May 13 23:47:58.415568 kernel: raid6: int64x1 gen() 5053 MB/s May 13 23:47:58.415583 kernel: raid6: using algorithm neonx4 gen() 15804 MB/s May 13 23:47:58.432572 kernel: raid6: .... xor() 12424 MB/s, rmw enabled May 13 23:47:58.432587 kernel: raid6: using neon recovery algorithm May 13 23:47:58.437787 kernel: xor: measuring software checksum speed May 13 23:47:58.437807 kernel: 8regs : 21601 MB/sec May 13 23:47:58.444745 kernel: 32regs : 21687 MB/sec May 13 23:47:58.445794 kernel: arm64_neon : 503 MB/sec May 13 23:47:58.445808 kernel: xor: using function: 32regs (21687 MB/sec) May 13 23:47:58.501575 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:47:58.522577 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:47:58.527754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:47:58.555508 systemd-udevd[462]: Using default interface naming scheme 'v255'. May 13 23:47:58.559288 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:47:58.562433 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:47:58.587174 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation May 13 23:47:58.620939 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 13 23:47:58.623220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:47:58.680389 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:47:58.682702 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:47:58.706369 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:47:58.709148 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:47:58.710696 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:47:58.713147 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:47:58.716008 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:47:58.742285 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 23:47:58.750145 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 23:47:58.749518 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:47:58.759597 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:47:58.759650 kernel: GPT:9289727 != 19775487
May 13 23:47:58.760635 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:47:58.760668 kernel: GPT:9289727 != 19775487
May 13 23:47:58.760688 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:47:58.761663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:47:58.768042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:47:58.768152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:47:58.771340 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:47:58.774070 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:47:58.774170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:47:58.776789 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:47:58.778383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:47:58.802567 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (509)
May 13 23:47:58.808576 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (524)
May 13 23:47:58.810432 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:47:58.819635 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:47:58.831959 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:47:58.839514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:47:58.845702 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:47:58.846625 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:47:58.849200 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:47:58.852296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:47:58.875572 disk-uuid[551]: Primary Header is updated.
May 13 23:47:58.875572 disk-uuid[551]: Secondary Entries is updated.
May 13 23:47:58.875572 disk-uuid[551]: Secondary Header is updated.
May 13 23:47:58.882859 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:47:58.891987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:47:59.918642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:47:59.918729 disk-uuid[556]: The operation has completed successfully.
May 13 23:47:59.940712 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:47:59.940812 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:47:59.969612 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:47:59.982719 sh[572]: Success
May 13 23:48:00.013468 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 23:48:00.052418 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:48:00.055089 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:48:00.069643 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:48:00.080620 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 13 23:48:00.080662 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 23:48:00.080672 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:48:00.082818 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:48:00.082833 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:48:00.086679 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:48:00.087919 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:48:00.088769 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:48:00.091153 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:48:00.119963 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:00.120025 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:48:00.120037 kernel: BTRFS info (device vda6): using free space tree
May 13 23:48:00.124574 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:48:00.128604 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:00.131411 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:48:00.133623 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:48:00.195665 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:48:00.198340 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:48:00.242803 systemd-networkd[754]: lo: Link UP
May 13 23:48:00.242814 systemd-networkd[754]: lo: Gained carrier
May 13 23:48:00.243675 systemd-networkd[754]: Enumeration completed
May 13 23:48:00.243813 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:48:00.244056 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:48:00.244060 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:48:00.245291 systemd-networkd[754]: eth0: Link UP
May 13 23:48:00.245294 systemd-networkd[754]: eth0: Gained carrier
May 13 23:48:00.245301 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:48:00.245454 systemd[1]: Reached target network.target - Network.
May 13 23:48:00.254594 ignition[668]: Ignition 2.20.0
May 13 23:48:00.254607 ignition[668]: Stage: fetch-offline
May 13 23:48:00.254640 ignition[668]: no configs at "/usr/lib/ignition/base.d"
May 13 23:48:00.254649 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:00.254882 ignition[668]: parsed url from cmdline: ""
May 13 23:48:00.254886 ignition[668]: no config URL provided
May 13 23:48:00.254890 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:48:00.254897 ignition[668]: no config at "/usr/lib/ignition/user.ign"
May 13 23:48:00.254937 ignition[668]: op(1): [started] loading QEMU firmware config module
May 13 23:48:00.254943 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 23:48:00.266114 ignition[668]: op(1): [finished] loading QEMU firmware config module
May 13 23:48:00.272629 systemd-networkd[754]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:48:00.309154 ignition[668]: parsing config with SHA512: c10ee9b881c3ad7020de83c9196a85efb96c487c394333b1f3687f9559cdbf63d5cc5201e9debf16257cee648b07084e2a2c14fb1817f36ad69946e07927ee70
May 13 23:48:00.314055 unknown[668]: fetched base config from "system"
May 13 23:48:00.314065 unknown[668]: fetched user config from "qemu"
May 13 23:48:00.315023 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.82
May 13 23:48:00.315435 ignition[668]: fetch-offline: fetch-offline passed
May 13 23:48:00.315032 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
May 13 23:48:00.316184 ignition[668]: Ignition finished successfully
May 13 23:48:00.318886 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:48:00.320030 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:48:00.320792 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:48:00.340734 ignition[769]: Ignition 2.20.0
May 13 23:48:00.340746 ignition[769]: Stage: kargs
May 13 23:48:00.340908 ignition[769]: no configs at "/usr/lib/ignition/base.d"
May 13 23:48:00.340918 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:00.341773 ignition[769]: kargs: kargs passed
May 13 23:48:00.341818 ignition[769]: Ignition finished successfully
May 13 23:48:00.345459 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:48:00.347243 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:48:00.373949 ignition[778]: Ignition 2.20.0
May 13 23:48:00.373966 ignition[778]: Stage: disks
May 13 23:48:00.374126 ignition[778]: no configs at "/usr/lib/ignition/base.d"
May 13 23:48:00.374136 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:00.376054 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:48:00.375032 ignition[778]: disks: disks passed
May 13 23:48:00.377916 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:48:00.375077 ignition[778]: Ignition finished successfully
May 13 23:48:00.378856 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:48:00.380147 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:48:00.381637 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:48:00.382816 systemd[1]: Reached target basic.target - Basic System.
May 13 23:48:00.385260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:48:00.407272 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:48:00.410716 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:48:00.412533 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:48:00.485577 kernel: EXT4-fs (vda9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 13 23:48:00.485909 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:48:00.487043 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:48:00.491659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:48:00.493115 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:48:00.493902 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:48:00.493943 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:48:00.493966 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:48:00.514228 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:48:00.517049 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:48:00.520864 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796)
May 13 23:48:00.520903 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:00.520920 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:48:00.520930 kernel: BTRFS info (device vda6): using free space tree
May 13 23:48:00.523586 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:48:00.525129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:48:00.574894 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:48:00.578306 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
May 13 23:48:00.582117 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:48:00.585686 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:48:00.692957 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:48:00.694903 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:48:00.696379 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:48:00.722654 kernel: BTRFS info (device vda6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:00.744835 ignition[910]: INFO : Ignition 2.20.0
May 13 23:48:00.744835 ignition[910]: INFO : Stage: mount
May 13 23:48:00.746185 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:48:00.746185 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:00.746185 ignition[910]: INFO : mount: mount passed
May 13 23:48:00.746185 ignition[910]: INFO : Ignition finished successfully
May 13 23:48:00.748668 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:48:00.751110 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:48:00.752124 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:48:01.079763 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:48:01.081248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:48:01.098576 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924)
May 13 23:48:01.100581 kernel: BTRFS info (device vda6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:48:01.100602 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:48:01.100620 kernel: BTRFS info (device vda6): using free space tree
May 13 23:48:01.103588 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:48:01.104766 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:48:01.128626 ignition[941]: INFO : Ignition 2.20.0
May 13 23:48:01.128626 ignition[941]: INFO : Stage: files
May 13 23:48:01.128626 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:48:01.128626 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:01.135742 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:48:01.135742 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:48:01.135742 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:48:01.135742 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:48:01.135742 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:48:01.135742 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:48:01.135742 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 23:48:01.135742 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 13 23:48:01.132643 unknown[941]: wrote ssh authorized keys file for user: core
May 13 23:48:01.234462 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:48:01.645773 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:48:01.647800 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 23:48:01.976638 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 23:48:02.006738 systemd-networkd[754]: eth0: Gained IPv6LL
May 13 23:48:02.301443 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 23:48:02.301443 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 23:48:02.304809 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:48:02.304809 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:48:02.304809 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 23:48:02.304809 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 23:48:02.304809 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:48:02.304809 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:48:02.304809 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 23:48:02.304809 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:48:02.323968 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:48:02.327512 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:48:02.328626 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:48:02.328626 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:48:02.328626 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:48:02.328626 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:48:02.328626 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:48:02.328626 ignition[941]: INFO : files: files passed
May 13 23:48:02.328626 ignition[941]: INFO : Ignition finished successfully
May 13 23:48:02.330216 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:48:02.333704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:48:02.349216 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:48:02.353134 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:48:02.353254 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:48:02.356769 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:48:02.360541 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:48:02.360541 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:48:02.363303 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:48:02.364358 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:48:02.367217 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:48:02.368906 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:48:02.422095 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:48:02.422203 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:48:02.424055 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:48:02.425301 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:48:02.426836 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:48:02.427695 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:48:02.459678 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:48:02.462154 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:48:02.484073 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:48:02.485152 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:48:02.487054 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:48:02.488427 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:48:02.488581 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:48:02.490696 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:48:02.492328 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:48:02.493701 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:48:02.495117 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:48:02.496694 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:48:02.498401 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:48:02.500045 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:48:02.501718 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:48:02.503325 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:48:02.504758 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:48:02.506090 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:48:02.506227 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:48:02.508172 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:48:02.509706 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:48:02.511250 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:48:02.511350 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:48:02.513016 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:48:02.513157 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:48:02.515543 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:48:02.515677 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:48:02.517272 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:48:02.518582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:48:02.523627 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:48:02.524663 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:48:02.526345 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:48:02.527663 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:48:02.527765 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:48:02.529090 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:48:02.529170 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:48:02.530432 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:48:02.530560 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:48:02.532051 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:48:02.532157 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:48:02.534233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:48:02.535379 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:48:02.535518 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:48:02.538100 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:48:02.539180 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:48:02.539322 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:48:02.540691 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:48:02.540807 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:48:02.548678 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:48:02.548778 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:48:02.555735 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:48:02.558495 ignition[998]: INFO : Ignition 2.20.0
May 13 23:48:02.558495 ignition[998]: INFO : Stage: umount
May 13 23:48:02.558495 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:48:02.558495 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:48:02.562451 ignition[998]: INFO : umount: umount passed
May 13 23:48:02.562451 ignition[998]: INFO : Ignition finished successfully
May 13 23:48:02.562110 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:48:02.562253 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:48:02.563709 systemd[1]: Stopped target network.target - Network.
May 13 23:48:02.565019 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:48:02.565086 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:48:02.566593 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:48:02.566648 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:48:02.568500 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:48:02.568544 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:48:02.570319 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:48:02.570361 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:48:02.572237 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:48:02.573811 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:48:02.582665 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:48:02.582794 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:48:02.587208 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:48:02.587691 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:48:02.587803 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:48:02.591518 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:48:02.592177 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:48:02.592245 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:48:02.594548 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:48:02.595348 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:48:02.595405 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:48:02.597432 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:48:02.597480 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:48:02.603852 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:48:02.603909 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:48:02.605812 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:48:02.605862 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:48:02.608722 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:48:02.613961 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:48:02.614030 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:48:02.617574 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:48:02.617689 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:48:02.622180 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:48:02.622294 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:48:02.632367 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:48:02.632608 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:48:02.634043 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:48:02.634086 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:48:02.635627 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:48:02.635669 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:48:02.637652 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:48:02.637707 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:48:02.640308 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:48:02.640364 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:48:02.643131 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:48:02.643181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:48:02.646537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:48:02.647545 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:48:02.647632 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:48:02.650481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:48:02.650528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:48:02.655061 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:48:02.655125 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:48:02.658785 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:48:02.658878 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:48:02.665250 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:48:02.665356 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:48:02.666792 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:48:02.669307 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:48:02.691200 systemd[1]: Switching root.
May 13 23:48:02.722513 systemd-journald[235]: Journal stopped
May 13 23:48:03.623923 systemd-journald[235]: Received SIGTERM from PID 1 (systemd).
May 13 23:48:03.623978 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:48:03.623991 kernel: SELinux: policy capability open_perms=1
May 13 23:48:03.624001 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:48:03.624010 kernel: SELinux: policy capability always_check_network=0
May 13 23:48:03.624023 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:48:03.624037 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:48:03.624046 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:48:03.624055 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:48:03.624070 kernel: audit: type=1403 audit(1747180082.872:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:48:03.624081 systemd[1]: Successfully loaded SELinux policy in 33.538ms.
May 13 23:48:03.624097 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.661ms.
May 13 23:48:03.624109 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:48:03.624119 systemd[1]: Detected virtualization kvm.
May 13 23:48:03.624131 systemd[1]: Detected architecture arm64.
May 13 23:48:03.624142 systemd[1]: Detected first boot.
May 13 23:48:03.624152 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:48:03.624163 zram_generator::config[1044]: No configuration found.
May 13 23:48:03.624174 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:48:03.624184 systemd[1]: Populated /etc with preset unit settings.
May 13 23:48:03.624195 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:48:03.624209 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:48:03.624222 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:48:03.624232 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:48:03.624243 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:48:03.624253 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:48:03.624264 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:48:03.624275 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:48:03.624285 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:48:03.624295 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:48:03.624309 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:48:03.624321 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:48:03.624332 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:48:03.624343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:48:03.624354 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:48:03.624365 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:48:03.624375 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:48:03.624386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:48:03.624396 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 23:48:03.624416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:48:03.624427 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:48:03.624440 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:48:03.624451 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:48:03.624461 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:48:03.624471 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:48:03.624481 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:48:03.624492 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:48:03.624504 systemd[1]: Reached target swap.target - Swaps.
May 13 23:48:03.624516 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:48:03.624527 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:48:03.624538 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:48:03.624549 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:48:03.624571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:48:03.624582 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:48:03.624593 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:48:03.624604 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:48:03.624617 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:48:03.624627 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:48:03.624638 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:48:03.624648 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:48:03.624673 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:48:03.624684 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:48:03.624695 systemd[1]: Reached target machines.target - Containers.
May 13 23:48:03.624705 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:48:03.624717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:48:03.624728 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:48:03.624740 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:48:03.624750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:48:03.624760 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:48:03.624771 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:48:03.624781 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:48:03.624792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:48:03.624802 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:48:03.624814 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:48:03.624825 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:48:03.624836 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:48:03.624847 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:48:03.624858 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:48:03.624869 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:48:03.624879 kernel: fuse: init (API version 7.39)
May 13 23:48:03.624888 kernel: loop: module loaded
May 13 23:48:03.624898 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:48:03.624910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:48:03.624922 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:48:03.624932 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:48:03.624942 kernel: ACPI: bus type drm_connector registered
May 13 23:48:03.624952 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:48:03.624965 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:48:03.624976 systemd[1]: Stopped verity-setup.service.
May 13 23:48:03.624987 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:48:03.624998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:48:03.625009 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:48:03.625019 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:48:03.625030 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:48:03.625040 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:48:03.625052 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:48:03.625083 systemd-journald[1105]: Collecting audit messages is disabled.
May 13 23:48:03.625105 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:48:03.625117 systemd-journald[1105]: Journal started
May 13 23:48:03.625137 systemd-journald[1105]: Runtime Journal (/run/log/journal/ebb2b7edf4454c2ba3b37d21f94109e3) is 5.9M, max 47.3M, 41.4M free.
May 13 23:48:03.382805 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:48:03.398157 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 23:48:03.398620 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:48:03.626568 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:48:03.629008 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:48:03.630029 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:48:03.631428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:48:03.631653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:48:03.632860 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:48:03.633031 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:48:03.634272 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:48:03.634464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:48:03.635808 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:48:03.636002 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:48:03.637153 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:48:03.637330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:48:03.638622 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:48:03.639891 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:48:03.641224 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:48:03.643601 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:48:03.658081 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:48:03.660933 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:48:03.663091 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:48:03.664051 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:48:03.664094 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:48:03.666420 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:48:03.673530 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:48:03.675664 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:48:03.676685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:48:03.679151 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:48:03.681281 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:48:03.682345 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:48:03.685741 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:48:03.686858 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:48:03.689385 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:48:03.691944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:48:03.694418 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:48:03.694787 systemd-journald[1105]: Time spent on flushing to /var/log/journal/ebb2b7edf4454c2ba3b37d21f94109e3 is 11.968ms for 868 entries.
May 13 23:48:03.694787 systemd-journald[1105]: System Journal (/var/log/journal/ebb2b7edf4454c2ba3b37d21f94109e3) is 8M, max 195.6M, 187.6M free.
May 13 23:48:03.725280 systemd-journald[1105]: Received client request to flush runtime journal.
May 13 23:48:03.710162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:48:03.715519 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:48:03.717107 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:48:03.718835 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:48:03.725150 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:48:03.729102 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:48:03.730582 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:48:03.732815 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:48:03.733571 kernel: loop0: detected capacity change from 0 to 201592
May 13 23:48:03.737281 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:48:03.749465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:48:03.750597 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:48:03.762998 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:48:03.765608 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 23:48:03.766898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:48:03.779973 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:48:03.784666 kernel: loop1: detected capacity change from 0 to 103832
May 13 23:48:03.800660 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 13 23:48:03.800685 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 13 23:48:03.806623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:48:03.819745 kernel: loop2: detected capacity change from 0 to 126448
May 13 23:48:03.845575 kernel: loop3: detected capacity change from 0 to 201592
May 13 23:48:03.858300 kernel: loop4: detected capacity change from 0 to 103832
May 13 23:48:03.863593 kernel: loop5: detected capacity change from 0 to 126448
May 13 23:48:03.871146 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 23:48:03.871637 (sd-merge)[1186]: Merged extensions into '/usr'.
May 13 23:48:03.878773 systemd[1]: Reload requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:48:03.878792 systemd[1]: Reloading...
May 13 23:48:03.929968 zram_generator::config[1214]: No configuration found.
May 13 23:48:04.044701 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:48:04.061118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:48:04.124069 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:48:04.124224 systemd[1]: Reloading finished in 244 ms.
May 13 23:48:04.145534 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:48:04.146783 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:48:04.163209 systemd[1]: Starting ensure-sysext.service...
May 13 23:48:04.165096 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:48:04.178888 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
May 13 23:48:04.178966 systemd[1]: Reloading...
May 13 23:48:04.184824 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:48:04.185069 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:48:04.185818 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:48:04.186075 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 13 23:48:04.186130 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 13 23:48:04.190269 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:48:04.190282 systemd-tmpfiles[1251]: Skipping /boot
May 13 23:48:04.200136 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:48:04.200158 systemd-tmpfiles[1251]: Skipping /boot
May 13 23:48:04.232582 zram_generator::config[1277]: No configuration found.
May 13 23:48:04.328772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:48:04.392547 systemd[1]: Reloading finished in 213 ms.
May 13 23:48:04.406725 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:48:04.423019 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:48:04.431699 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:48:04.434165 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:48:04.451110 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:48:04.455814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:48:04.465871 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:48:04.470648 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:48:04.485595 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:48:04.490205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:48:04.491843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:48:04.494188 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:48:04.499765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:48:04.501090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:48:04.501325 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:48:04.509871 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:48:04.513428 systemd-udevd[1321]: Using default interface naming scheme 'v255'.
May 13 23:48:04.516825 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:48:04.519202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:48:04.519469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:48:04.521147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:48:04.521307 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:48:04.522874 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:48:04.523039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:48:04.533159 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:48:04.538802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:48:04.552525 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:48:04.552860 augenrules[1359]: No rules
May 13 23:48:04.554967 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:48:04.555215 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:48:04.557304 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:48:04.570963 systemd[1]: Finished ensure-sysext.service.
May 13 23:48:04.580765 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:48:04.581678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:48:04.583884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:48:04.585936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:48:04.590794 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:48:04.604824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:48:04.605732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:48:04.605780 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:48:04.611807 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:48:04.615819 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:48:04.616766 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:48:04.617170 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:48:04.618494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:48:04.620594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:48:04.621900 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:48:04.622065 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:48:04.623332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:48:04.623513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:48:04.632583 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1365)
May 13 23:48:04.633296 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:48:04.633490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:48:04.644316 augenrules[1379]: /sbin/augenrules: No change
May 13 23:48:04.646012 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 23:48:04.650564 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:48:04.650643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:48:04.664404 augenrules[1416]: No rules
May 13 23:48:04.666734 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:48:04.667009 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:48:04.726684 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:48:04.727885 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:48:04.735613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:48:04.736708 systemd-resolved[1320]: Positive Trust Anchors:
May 13 23:48:04.736725 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:48:04.736760 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:48:04.744719 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:48:04.760007 systemd-resolved[1320]: Defaulting to hostname 'linux'.
May 13 23:48:04.761742 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:48:04.763149 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:48:04.775747 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:48:04.778139 systemd-networkd[1387]: lo: Link UP
May 13 23:48:04.778154 systemd-networkd[1387]: lo: Gained carrier
May 13 23:48:04.791756 systemd-networkd[1387]: Enumeration completed
May 13 23:48:04.792741 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:48:04.792751 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:48:04.793082 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:48:04.794351 systemd-networkd[1387]: eth0: Link UP
May 13 23:48:04.794360 systemd-networkd[1387]: eth0: Gained carrier
May 13 23:48:04.794376 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:48:04.798545 systemd[1]: Reached target network.target - Network.
May 13 23:48:04.801164 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:48:04.803605 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:48:04.807615 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:48:04.807801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:48:04.812204 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection.
May 13 23:48:04.813284 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 23:48:04.813334 systemd-timesyncd[1390]: Initial clock synchronization to Tue 2025-05-13 23:48:04.426282 UTC.
May 13 23:48:04.822672 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:48:04.827425 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:48:04.829804 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:48:04.856733 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:48:04.866203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:48:04.892603 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:48:04.894340 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:48:04.895586 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:48:04.896564 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:48:04.897689 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:48:04.898924 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:48:04.899957 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:48:04.901083 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:48:04.902163 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:48:04.902195 systemd[1]: Reached target paths.target - Path Units. May 13 23:48:04.903009 systemd[1]: Reached target timers.target - Timer Units. May 13 23:48:04.904975 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:48:04.907462 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:48:04.910957 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:48:04.912273 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:48:04.913362 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:48:04.916809 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:48:04.918401 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
May 13 23:48:04.920809 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:48:04.922546 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:48:04.923499 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:48:04.924394 systemd[1]: Reached target basic.target - Basic System. May 13 23:48:04.925262 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:48:04.925297 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:48:04.926280 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:48:04.928113 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:48:04.930461 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:48:04.932330 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:48:04.936716 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:48:04.939894 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:48:04.941119 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:48:04.945137 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:48:04.947320 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:48:04.950615 jq[1448]: false May 13 23:48:04.950763 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:48:04.954187 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 13 23:48:04.956098 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:48:04.956607 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:48:04.957259 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:48:04.962827 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:48:04.964898 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:48:04.971359 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:48:04.971602 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:48:04.971687 dbus-daemon[1447]: [system] SELinux support is enabled May 13 23:48:04.974369 extend-filesystems[1449]: Found loop3 May 13 23:48:04.974369 extend-filesystems[1449]: Found loop4 May 13 23:48:04.974369 extend-filesystems[1449]: Found loop5 May 13 23:48:04.974369 extend-filesystems[1449]: Found vda May 13 23:48:04.974369 extend-filesystems[1449]: Found vda1 May 13 23:48:04.974369 extend-filesystems[1449]: Found vda2 May 13 23:48:04.974369 extend-filesystems[1449]: Found vda3 May 13 23:48:04.974369 extend-filesystems[1449]: Found usr May 13 23:48:04.974369 extend-filesystems[1449]: Found vda4 May 13 23:48:04.974369 extend-filesystems[1449]: Found vda6 May 13 23:48:04.974369 extend-filesystems[1449]: Found vda7 May 13 23:48:04.974369 extend-filesystems[1449]: Found vda9 May 13 23:48:04.974369 extend-filesystems[1449]: Checking size of /dev/vda9 May 13 23:48:04.973740 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:48:04.997629 jq[1460]: true May 13 23:48:04.976794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 13 23:48:04.976994 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:48:04.987915 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:48:04.987952 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:48:04.989131 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:48:04.989148 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:48:04.996455 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:48:04.996704 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:48:05.008938 extend-filesystems[1449]: Resized partition /dev/vda9 May 13 23:48:05.010048 tar[1464]: linux-arm64/LICENSE May 13 23:48:05.010048 tar[1464]: linux-arm64/helm May 13 23:48:05.016613 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:48:05.016641 extend-filesystems[1480]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:48:05.019604 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:48:05.021357 update_engine[1459]: I20250513 23:48:05.019122 1459 main.cc:92] Flatcar Update Engine starting May 13 23:48:05.023557 systemd[1]: Started update-engine.service - Update Engine. 
May 13 23:48:05.023748 jq[1477]: true May 13 23:48:05.024722 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1352) May 13 23:48:05.027092 update_engine[1459]: I20250513 23:48:05.026916 1459 update_check_scheduler.cc:74] Next update check in 5m16s May 13 23:48:05.038786 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:48:05.051428 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:48:05.081212 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:48:05.082319 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:48:05.082319 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:48:05.082319 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:48:05.097433 extend-filesystems[1449]: Resized filesystem in /dev/vda9 May 13 23:48:05.083282 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:48:05.083468 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:48:05.084145 systemd-logind[1457]: New seat seat0. May 13 23:48:05.092382 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:48:05.127579 bash[1503]: Updated "/home/core/.ssh/authorized_keys" May 13 23:48:05.129432 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:48:05.131868 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 13 23:48:05.147288 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:48:05.254979 containerd[1479]: time="2025-05-13T23:48:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:48:05.256229 containerd[1479]: time="2025-05-13T23:48:05.256174405Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:48:05.265976 containerd[1479]: time="2025-05-13T23:48:05.265855126Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.004µs" May 13 23:48:05.265976 containerd[1479]: time="2025-05-13T23:48:05.265899815Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:48:05.265976 containerd[1479]: time="2025-05-13T23:48:05.265921208Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:48:05.266568 containerd[1479]: time="2025-05-13T23:48:05.266483168Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:48:05.266661 containerd[1479]: time="2025-05-13T23:48:05.266641672Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:48:05.266787 containerd[1479]: time="2025-05-13T23:48:05.266770676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:48:05.266968 containerd[1479]: time="2025-05-13T23:48:05.266948251Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:48:05.267042 containerd[1479]: time="2025-05-13T23:48:05.267027922Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:48:05.267588 containerd[1479]: time="2025-05-13T23:48:05.267535410Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:48:05.267588 containerd[1479]: time="2025-05-13T23:48:05.267578881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:48:05.267661 containerd[1479]: time="2025-05-13T23:48:05.267591899Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:48:05.267661 containerd[1479]: time="2025-05-13T23:48:05.267651624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:48:05.267821 containerd[1479]: time="2025-05-13T23:48:05.267740621Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:48:05.268154 containerd[1479]: time="2025-05-13T23:48:05.268120285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:48:05.268188 containerd[1479]: time="2025-05-13T23:48:05.268173349Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:48:05.268219 containerd[1479]: time="2025-05-13T23:48:05.268188613Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:48:05.268243 containerd[1479]: time="2025-05-13T23:48:05.268223214Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:48:05.268583 containerd[1479]: time="2025-05-13T23:48:05.268563138Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:48:05.268720 containerd[1479]: time="2025-05-13T23:48:05.268691229Z" level=info msg="metadata content store policy set" policy=shared May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272600319Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272666400Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272681094Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272693769Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272706065Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272717713Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272730921Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272742722Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272753684Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272764114Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272774354Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272787487Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272924408Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:48:05.275567 containerd[1479]: time="2025-05-13T23:48:05.272944240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.272959771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.272970733Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.272979945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.272992202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.273004003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.273014128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 
23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.273024634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.273034645Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.273046408Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.273302474Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.273317281Z" level=info msg="Start snapshots syncer" May 13 23:48:05.275856 containerd[1479]: time="2025-05-13T23:48:05.273332774Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:48:05.276042 containerd[1479]: time="2025-05-13T23:48:05.273584310Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:48:05.276042 containerd[1479]: time="2025-05-13T23:48:05.273629912Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273704863Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273826596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273850083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273863558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273874483Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273894048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273904821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273915365Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273940184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273952974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273962376Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273984834Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.273997358Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:48:05.276137 containerd[1479]: time="2025-05-13T23:48:05.274006303Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274015820Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274023737Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274032492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274043455Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274116807Z" level=info msg="runtime interface created" May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274122136Z" level=info msg="created NRI interface" May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274131158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274143377Z" level=info msg="Connect containerd service" May 13 23:48:05.276348 containerd[1479]: time="2025-05-13T23:48:05.274168843Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:48:05.276348 
containerd[1479]: time="2025-05-13T23:48:05.274783066Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:48:05.402581 containerd[1479]: time="2025-05-13T23:48:05.402451479Z" level=info msg="Start subscribing containerd event" May 13 23:48:05.402722 containerd[1479]: time="2025-05-13T23:48:05.402706022Z" level=info msg="Start recovering state" May 13 23:48:05.402861 containerd[1479]: time="2025-05-13T23:48:05.402846294Z" level=info msg="Start event monitor" May 13 23:48:05.402942 containerd[1479]: time="2025-05-13T23:48:05.402919798Z" level=info msg="Start cni network conf syncer for default" May 13 23:48:05.403039 containerd[1479]: time="2025-05-13T23:48:05.403027447Z" level=info msg="Start streaming server" May 13 23:48:05.403151 containerd[1479]: time="2025-05-13T23:48:05.403137875Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:48:05.403290 containerd[1479]: time="2025-05-13T23:48:05.403277232Z" level=info msg="runtime interface starting up..." May 13 23:48:05.403369 containerd[1479]: time="2025-05-13T23:48:05.403358540Z" level=info msg="starting plugins..." May 13 23:48:05.403640 containerd[1479]: time="2025-05-13T23:48:05.402487222Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:48:05.403640 containerd[1479]: time="2025-05-13T23:48:05.403530139Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:48:05.403745 containerd[1479]: time="2025-05-13T23:48:05.403729640Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:48:05.403991 containerd[1479]: time="2025-05-13T23:48:05.403975809Z" level=info msg="containerd successfully booted in 0.149341s" May 13 23:48:05.404086 systemd[1]: Started containerd.service - containerd container runtime. 
May 13 23:48:05.445187 tar[1464]: linux-arm64/README.md May 13 23:48:05.463968 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:48:05.616339 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:48:05.636472 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:48:05.639780 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:48:05.664301 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:48:05.664516 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:48:05.667277 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:48:05.690608 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:48:05.693711 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:48:05.695795 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:48:05.697067 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:48:05.910690 systemd-networkd[1387]: eth0: Gained IPv6LL May 13 23:48:05.913098 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:48:05.915908 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:48:05.918364 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:48:05.920780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:48:05.935340 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:48:05.950008 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:48:05.950227 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:48:05.952205 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 13 23:48:05.960620 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:48:06.524610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:48:06.526131 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:48:06.527867 systemd[1]: Startup finished in 560ms (kernel) + 5.165s (initrd) + 3.690s (userspace) = 9.416s. May 13 23:48:06.528792 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:48:07.065986 kubelet[1576]: E0513 23:48:07.065911 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:48:07.068294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:48:07.068445 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:48:07.068874 systemd[1]: kubelet.service: Consumed 907ms CPU time, 251.5M memory peak. May 13 23:48:10.200499 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:48:10.204969 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:52620.service - OpenSSH per-connection server daemon (10.0.0.1:52620). May 13 23:48:10.287090 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 52620 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:10.289777 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:10.302729 systemd-logind[1457]: New session 1 of user core. May 13 23:48:10.303774 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 13 23:48:10.304933 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:48:10.335114 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:48:10.343895 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:48:10.355943 (systemd)[1594]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:48:10.359171 systemd-logind[1457]: New session c1 of user core. May 13 23:48:10.496121 systemd[1594]: Queued start job for default target default.target. May 13 23:48:10.510546 systemd[1594]: Created slice app.slice - User Application Slice. May 13 23:48:10.510585 systemd[1594]: Reached target paths.target - Paths. May 13 23:48:10.510623 systemd[1594]: Reached target timers.target - Timers. May 13 23:48:10.511914 systemd[1594]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:48:10.521541 systemd[1594]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:48:10.521641 systemd[1594]: Reached target sockets.target - Sockets. May 13 23:48:10.521689 systemd[1594]: Reached target basic.target - Basic System. May 13 23:48:10.521718 systemd[1594]: Reached target default.target - Main User Target. May 13 23:48:10.521749 systemd[1594]: Startup finished in 156ms. May 13 23:48:10.522009 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:48:10.523873 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:48:10.589391 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:52636.service - OpenSSH per-connection server daemon (10.0.0.1:52636). 
May 13 23:48:10.642178 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 52636 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:10.643476 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:10.647179 systemd-logind[1457]: New session 2 of user core. May 13 23:48:10.668777 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:48:10.718414 sshd[1607]: Connection closed by 10.0.0.1 port 52636 May 13 23:48:10.718833 sshd-session[1605]: pam_unix(sshd:session): session closed for user core May 13 23:48:10.734410 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:52636.service: Deactivated successfully. May 13 23:48:10.738441 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:48:10.740157 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. May 13 23:48:10.742031 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:52638.service - OpenSSH per-connection server daemon (10.0.0.1:52638). May 13 23:48:10.742797 systemd-logind[1457]: Removed session 2. May 13 23:48:10.796484 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 52638 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:48:10.797820 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:48:10.802615 systemd-logind[1457]: New session 3 of user core. May 13 23:48:10.818783 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:48:10.871669 sshd[1615]: Connection closed by 10.0.0.1 port 52638 May 13 23:48:10.872216 sshd-session[1612]: pam_unix(sshd:session): session closed for user core May 13 23:48:10.883131 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:52638.service: Deactivated successfully. May 13 23:48:10.884592 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:48:10.885604 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. 
May 13 23:48:10.886840 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:52642.service - OpenSSH per-connection server daemon (10.0.0.1:52642).
May 13 23:48:10.887586 systemd-logind[1457]: Removed session 3.
May 13 23:48:10.938168 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 52642 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:48:10.939609 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:10.943609 systemd-logind[1457]: New session 4 of user core.
May 13 23:48:10.950742 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 23:48:11.001055 sshd[1623]: Connection closed by 10.0.0.1 port 52642
May 13 23:48:11.001653 sshd-session[1620]: pam_unix(sshd:session): session closed for user core
May 13 23:48:11.013823 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:52642.service: Deactivated successfully.
May 13 23:48:11.015428 systemd[1]: session-4.scope: Deactivated successfully.
May 13 23:48:11.016175 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit.
May 13 23:48:11.018112 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:52658.service - OpenSSH per-connection server daemon (10.0.0.1:52658).
May 13 23:48:11.018996 systemd-logind[1457]: Removed session 4.
May 13 23:48:11.069106 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 52658 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:48:11.071095 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:11.076121 systemd-logind[1457]: New session 5 of user core.
May 13 23:48:11.091771 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 23:48:11.161394 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 23:48:11.161739 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:48:11.176479 sudo[1632]: pam_unix(sudo:session): session closed for user root
May 13 23:48:11.178359 sshd[1631]: Connection closed by 10.0.0.1 port 52658
May 13 23:48:11.179103 sshd-session[1628]: pam_unix(sshd:session): session closed for user core
May 13 23:48:11.205919 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:52658.service: Deactivated successfully.
May 13 23:48:11.208829 systemd[1]: session-5.scope: Deactivated successfully.
May 13 23:48:11.209664 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit.
May 13 23:48:11.211414 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:52668.service - OpenSSH per-connection server daemon (10.0.0.1:52668).
May 13 23:48:11.212278 systemd-logind[1457]: Removed session 5.
May 13 23:48:11.281706 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 52668 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:48:11.283037 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:11.287631 systemd-logind[1457]: New session 6 of user core.
May 13 23:48:11.304814 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 23:48:11.364542 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 23:48:11.364829 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:48:11.368412 sudo[1642]: pam_unix(sudo:session): session closed for user root
May 13 23:48:11.373643 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 23:48:11.374102 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:48:11.383017 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:48:11.455640 augenrules[1664]: No rules
May 13 23:48:11.457052 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:48:11.457278 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:48:11.458641 sudo[1641]: pam_unix(sudo:session): session closed for user root
May 13 23:48:11.461580 sshd[1640]: Connection closed by 10.0.0.1 port 52668
May 13 23:48:11.462050 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
May 13 23:48:11.473672 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:52668.service: Deactivated successfully.
May 13 23:48:11.475383 systemd[1]: session-6.scope: Deactivated successfully.
May 13 23:48:11.476088 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit.
May 13 23:48:11.477906 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:52672.service - OpenSSH per-connection server daemon (10.0.0.1:52672).
May 13 23:48:11.478772 systemd-logind[1457]: Removed session 6.
May 13 23:48:11.534041 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 52672 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:48:11.535319 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:48:11.540165 systemd-logind[1457]: New session 7 of user core.
May 13 23:48:11.552735 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 23:48:11.604320 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 23:48:11.604623 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:48:11.984892 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 23:48:11.996892 (dockerd)[1696]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 23:48:12.279712 dockerd[1696]: time="2025-05-13T23:48:12.279596780Z" level=info msg="Starting up"
May 13 23:48:12.281342 dockerd[1696]: time="2025-05-13T23:48:12.281164783Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 23:48:12.431109 dockerd[1696]: time="2025-05-13T23:48:12.430924205Z" level=info msg="Loading containers: start."
May 13 23:48:12.608578 kernel: Initializing XFRM netlink socket
May 13 23:48:12.673371 systemd-networkd[1387]: docker0: Link UP
May 13 23:48:12.738602 dockerd[1696]: time="2025-05-13T23:48:12.738072305Z" level=info msg="Loading containers: done."
May 13 23:48:12.752548 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2942945438-merged.mount: Deactivated successfully.
May 13 23:48:12.764264 dockerd[1696]: time="2025-05-13T23:48:12.764208620Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 23:48:12.764401 dockerd[1696]: time="2025-05-13T23:48:12.764304798Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 13 23:48:12.764535 dockerd[1696]: time="2025-05-13T23:48:12.764499077Z" level=info msg="Daemon has completed initialization"
May 13 23:48:12.800702 dockerd[1696]: time="2025-05-13T23:48:12.800625187Z" level=info msg="API listen on /run/docker.sock"
May 13 23:48:12.801066 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 23:48:13.454074 containerd[1479]: time="2025-05-13T23:48:13.454015909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 13 23:48:14.077458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522429670.mount: Deactivated successfully.
May 13 23:48:14.959052 containerd[1479]: time="2025-05-13T23:48:14.958994488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:14.961239 containerd[1479]: time="2025-05-13T23:48:14.961180866Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120"
May 13 23:48:14.966039 containerd[1479]: time="2025-05-13T23:48:14.965983874Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:14.968297 containerd[1479]: time="2025-05-13T23:48:14.968236279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:14.969602 containerd[1479]: time="2025-05-13T23:48:14.969370266Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.515307725s"
May 13 23:48:14.969602 containerd[1479]: time="2025-05-13T23:48:14.969402707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
May 13 23:48:14.970019 containerd[1479]: time="2025-05-13T23:48:14.969992721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 13 23:48:15.984107 containerd[1479]: time="2025-05-13T23:48:15.984053266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:15.984565 containerd[1479]: time="2025-05-13T23:48:15.984506663Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573"
May 13 23:48:15.985330 containerd[1479]: time="2025-05-13T23:48:15.985305922Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:15.987843 containerd[1479]: time="2025-05-13T23:48:15.987812933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:15.988918 containerd[1479]: time="2025-05-13T23:48:15.988859525Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.018835491s"
May 13 23:48:15.988918 containerd[1479]: time="2025-05-13T23:48:15.988897911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
May 13 23:48:15.989324 containerd[1479]: time="2025-05-13T23:48:15.989300206Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 13 23:48:17.007221 containerd[1479]: time="2025-05-13T23:48:17.007164640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:17.007960 containerd[1479]: time="2025-05-13T23:48:17.007896323Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175"
May 13 23:48:17.008637 containerd[1479]: time="2025-05-13T23:48:17.008600713Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:17.011143 containerd[1479]: time="2025-05-13T23:48:17.011115415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:17.012191 containerd[1479]: time="2025-05-13T23:48:17.012167189Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.022835443s"
May 13 23:48:17.012237 containerd[1479]: time="2025-05-13T23:48:17.012197610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
May 13 23:48:17.012857 containerd[1479]: time="2025-05-13T23:48:17.012785665Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 13 23:48:17.318811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:48:17.320500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:48:17.447688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:48:17.451436 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:48:17.486922 kubelet[1970]: E0513 23:48:17.486865 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:48:17.490012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:48:17.490157 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:48:17.490456 systemd[1]: kubelet.service: Consumed 144ms CPU time, 104.1M memory peak.
May 13 23:48:18.006606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347639589.mount: Deactivated successfully.
May 13 23:48:18.337384 containerd[1479]: time="2025-05-13T23:48:18.337249815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:18.338247 containerd[1479]: time="2025-05-13T23:48:18.338189496Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353"
May 13 23:48:18.339219 containerd[1479]: time="2025-05-13T23:48:18.339181250Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:18.341184 containerd[1479]: time="2025-05-13T23:48:18.341127677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:18.341743 containerd[1479]: time="2025-05-13T23:48:18.341712727Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.328892351s"
May 13 23:48:18.341812 containerd[1479]: time="2025-05-13T23:48:18.341747032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 13 23:48:18.342236 containerd[1479]: time="2025-05-13T23:48:18.342211876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 13 23:48:18.900533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533023646.mount: Deactivated successfully.
May 13 23:48:19.638623 containerd[1479]: time="2025-05-13T23:48:19.638416053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:19.639538 containerd[1479]: time="2025-05-13T23:48:19.639318036Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 13 23:48:19.640869 containerd[1479]: time="2025-05-13T23:48:19.640835354Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:19.644094 containerd[1479]: time="2025-05-13T23:48:19.644057700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:19.645465 containerd[1479]: time="2025-05-13T23:48:19.645304650Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.303058618s"
May 13 23:48:19.645465 containerd[1479]: time="2025-05-13T23:48:19.645338476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 13 23:48:19.645917 containerd[1479]: time="2025-05-13T23:48:19.645777894Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 23:48:20.142539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783588704.mount: Deactivated successfully.
May 13 23:48:20.151662 containerd[1479]: time="2025-05-13T23:48:20.151602498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:48:20.152988 containerd[1479]: time="2025-05-13T23:48:20.152931170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 13 23:48:20.153747 containerd[1479]: time="2025-05-13T23:48:20.153701669Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:48:20.156566 containerd[1479]: time="2025-05-13T23:48:20.156406660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 23:48:20.157917 containerd[1479]: time="2025-05-13T23:48:20.157877717Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.062574ms"
May 13 23:48:20.158157 containerd[1479]: time="2025-05-13T23:48:20.158020181Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 13 23:48:20.158747 containerd[1479]: time="2025-05-13T23:48:20.158660336Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 13 23:48:20.725887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113118798.mount: Deactivated successfully.
May 13 23:48:22.282780 containerd[1479]: time="2025-05-13T23:48:22.279984221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:22.283497 containerd[1479]: time="2025-05-13T23:48:22.283163775Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
May 13 23:48:22.284648 containerd[1479]: time="2025-05-13T23:48:22.284611028Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:22.287586 containerd[1479]: time="2025-05-13T23:48:22.287260961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:48:22.289415 containerd[1479]: time="2025-05-13T23:48:22.289374189Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.13068177s"
May 13 23:48:22.289468 containerd[1479]: time="2025-05-13T23:48:22.289416735Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 13 23:48:26.544885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:48:26.545046 systemd[1]: kubelet.service: Consumed 144ms CPU time, 104.1M memory peak.
May 13 23:48:26.547196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:48:26.570285 systemd[1]: Reload requested from client PID 2128 ('systemctl') (unit session-7.scope)...
May 13 23:48:26.570300 systemd[1]: Reloading...
May 13 23:48:26.643586 zram_generator::config[2172]: No configuration found.
May 13 23:48:26.747544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:48:26.839107 systemd[1]: Reloading finished in 268 ms.
May 13 23:48:26.899689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:48:26.902379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:48:26.904207 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:48:26.904489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:48:26.904535 systemd[1]: kubelet.service: Consumed 97ms CPU time, 90.2M memory peak.
May 13 23:48:26.906270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:48:27.024859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:48:27.030302 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:48:27.083713 kubelet[2220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:48:27.083713 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 23:48:27.083713 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:48:27.084166 kubelet[2220]: I0513 23:48:27.083857 2220 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:48:27.884583 kubelet[2220]: I0513 23:48:27.884528 2220 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 23:48:27.884583 kubelet[2220]: I0513 23:48:27.884576 2220 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:48:27.884881 kubelet[2220]: I0513 23:48:27.884854 2220 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 23:48:27.914571 kubelet[2220]: E0513 23:48:27.914500 2220 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
May 13 23:48:27.916335 kubelet[2220]: I0513 23:48:27.916302 2220 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:48:27.928356 kubelet[2220]: I0513 23:48:27.928323 2220 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:48:27.931387 kubelet[2220]: I0513 23:48:27.931325 2220 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:48:27.932744 kubelet[2220]: I0513 23:48:27.932683 2220 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:48:27.932938 kubelet[2220]: I0513 23:48:27.932739 2220 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:48:27.933037 kubelet[2220]: I0513 23:48:27.932996 2220 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:48:27.933037 kubelet[2220]: I0513 23:48:27.933005 2220 container_manager_linux.go:304] "Creating device plugin manager"
May 13 23:48:27.933243 kubelet[2220]: I0513 23:48:27.933213 2220 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:48:27.935936 kubelet[2220]: I0513 23:48:27.935793 2220 kubelet.go:446] "Attempting to sync node with API server"
May 13 23:48:27.935936 kubelet[2220]: I0513 23:48:27.935823 2220 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:48:27.935936 kubelet[2220]: I0513 23:48:27.935849 2220 kubelet.go:352] "Adding apiserver pod source"
May 13 23:48:27.935936 kubelet[2220]: I0513 23:48:27.935865 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:48:27.938989 kubelet[2220]: I0513 23:48:27.938737 2220 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:48:27.939384 kubelet[2220]: I0513 23:48:27.939333 2220 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:48:27.939450 kubelet[2220]: W0513 23:48:27.939388 2220 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
May 13 23:48:27.939475 kubelet[2220]: E0513 23:48:27.939456 2220 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
May 13 23:48:27.939475 kubelet[2220]: W0513 23:48:27.939468 2220 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 23:48:27.940690 kubelet[2220]: I0513 23:48:27.940334 2220 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 23:48:27.940690 kubelet[2220]: I0513 23:48:27.940369 2220 server.go:1287] "Started kubelet"
May 13 23:48:27.940690 kubelet[2220]: W0513 23:48:27.940382 2220 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
May 13 23:48:27.940690 kubelet[2220]: E0513 23:48:27.940444 2220 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
May 13 23:48:27.940690 kubelet[2220]: I0513 23:48:27.940474 2220 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:48:27.941321 kubelet[2220]: I0513 23:48:27.941245 2220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:48:27.941659 kubelet[2220]: I0513 23:48:27.941637 2220 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:48:27.942710 kubelet[2220]: I0513 23:48:27.942682 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:48:27.943197 kubelet[2220]: I0513 23:48:27.943174 2220 server.go:490] "Adding debug handlers to kubelet server"
May 13 23:48:27.947735 kubelet[2220]: I0513 23:48:27.945763 2220 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:48:27.947735 kubelet[2220]: E0513 23:48:27.946176 2220 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3b0d1c69c36d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:48:27.940348781 +0000 UTC m=+0.906646217,LastTimestamp:2025-05-13 23:48:27.940348781 +0000 UTC m=+0.906646217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 23:48:27.947735 kubelet[2220]: E0513 23:48:27.947323 2220 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:48:27.947735 kubelet[2220]: I0513 23:48:27.947367 2220 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 23:48:27.948211 kubelet[2220]: I0513 23:48:27.948055 2220 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 23:48:27.948211 kubelet[2220]: I0513 23:48:27.948135 2220 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:48:27.949009 kubelet[2220]: W0513 23:48:27.948479 2220 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
May 13 23:48:27.949009 kubelet[2220]: E0513 23:48:27.948532 2220 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
May 13 23:48:27.949349 kubelet[2220]: E0513 23:48:27.949317 2220 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:48:27.949490 kubelet[2220]: I0513 23:48:27.949474 2220 factory.go:221] Registration of the systemd container factory successfully
May 13 23:48:27.949674 kubelet[2220]: E0513 23:48:27.949636 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms"
May 13 23:48:27.949674 kubelet[2220]: I0513 23:48:27.949648 2220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:48:27.951290 kubelet[2220]: I0513 23:48:27.950753 2220 factory.go:221] Registration of the containerd container factory successfully
May 13 23:48:27.962760 kubelet[2220]: I0513 23:48:27.962699 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:48:27.963217 kubelet[2220]: I0513 23:48:27.962949 2220 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 23:48:27.963217 kubelet[2220]: I0513 23:48:27.962968 2220 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 23:48:27.963217 kubelet[2220]: I0513 23:48:27.962985 2220 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:48:27.963986 kubelet[2220]: I0513 23:48:27.963942 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:48:27.963986 kubelet[2220]: I0513 23:48:27.963976 2220 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 23:48:27.965131 kubelet[2220]: I0513 23:48:27.963997 2220 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 23:48:27.965131 kubelet[2220]: I0513 23:48:27.964005 2220 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 23:48:27.965131 kubelet[2220]: E0513 23:48:27.964045 2220 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:48:27.965131 kubelet[2220]: W0513 23:48:27.964983 2220 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
May 13 23:48:27.965131 kubelet[2220]: E0513 23:48:27.965020 2220 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
May 13 23:48:27.966313 kubelet[2220]: I0513 23:48:27.966106 2220 policy_none.go:49] "None policy: Start" May 13
23:48:27.966313 kubelet[2220]: I0513 23:48:27.966130 2220 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:48:27.966313 kubelet[2220]: I0513 23:48:27.966143 2220 state_mem.go:35] "Initializing new in-memory state store" May 13 23:48:27.973930 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:48:27.994333 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:48:27.998702 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:48:28.015618 kubelet[2220]: I0513 23:48:28.015584 2220 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:48:28.016272 kubelet[2220]: I0513 23:48:28.015938 2220 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:48:28.016272 kubelet[2220]: I0513 23:48:28.015956 2220 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:48:28.016272 kubelet[2220]: I0513 23:48:28.016214 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:48:28.017203 kubelet[2220]: E0513 23:48:28.017157 2220 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 23:48:28.017269 kubelet[2220]: E0513 23:48:28.017235 2220 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:48:28.073135 systemd[1]: Created slice kubepods-burstable-pod708990f1c3be3131b863e6447823e29d.slice - libcontainer container kubepods-burstable-pod708990f1c3be3131b863e6447823e29d.slice. 
May 13 23:48:28.095441 kubelet[2220]: E0513 23:48:28.095239 2220 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 23:48:28.100411 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 13 23:48:28.114577 kubelet[2220]: E0513 23:48:28.114518 2220 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 23:48:28.117454 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 13 23:48:28.117594 kubelet[2220]: I0513 23:48:28.117488 2220 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 23:48:28.117953 kubelet[2220]: E0513 23:48:28.117924 2220 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
May 13 23:48:28.119578 kubelet[2220]: E0513 23:48:28.119543 2220 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 23:48:28.150738 kubelet[2220]: E0513 23:48:28.150404 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms"
May 13 23:48:28.250068 kubelet[2220]: I0513 23:48:28.249999 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/708990f1c3be3131b863e6447823e29d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"708990f1c3be3131b863e6447823e29d\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:48:28.250068 kubelet[2220]: I0513 23:48:28.250062 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:48:28.250284 kubelet[2220]: I0513 23:48:28.250092 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:48:28.250284 kubelet[2220]: I0513 23:48:28.250126 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:48:28.250284 kubelet[2220]: I0513 23:48:28.250143 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/708990f1c3be3131b863e6447823e29d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"708990f1c3be3131b863e6447823e29d\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:48:28.250284 kubelet[2220]: I0513 23:48:28.250163 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/708990f1c3be3131b863e6447823e29d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"708990f1c3be3131b863e6447823e29d\") " pod="kube-system/kube-apiserver-localhost"
May 13 23:48:28.250284 kubelet[2220]: I0513 23:48:28.250179 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:48:28.250409 kubelet[2220]: I0513 23:48:28.250204 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 23:48:28.250409 kubelet[2220]: I0513 23:48:28.250224 2220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 13 23:48:28.320214 kubelet[2220]: I0513 23:48:28.320091 2220 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 23:48:28.320510 kubelet[2220]: E0513 23:48:28.320476 2220 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
May 13 23:48:28.397151 containerd[1479]: time="2025-05-13T23:48:28.396845774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:708990f1c3be3131b863e6447823e29d,Namespace:kube-system,Attempt:0,}"
May 13 23:48:28.416240 containerd[1479]: time="2025-05-13T23:48:28.416127520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 13 23:48:28.418316 containerd[1479]: time="2025-05-13T23:48:28.418192239Z" level=info msg="connecting to shim 4ef775062853ffe01765a9478d22638a7d116afcad4e4b4a9e59cf57379d64eb" address="unix:///run/containerd/s/c92aa2c589448e76a93ef752d314e2e8d80e5424d21d76cde06661a01b64a3a2" namespace=k8s.io protocol=ttrpc version=3
May 13 23:48:28.421198 containerd[1479]: time="2025-05-13T23:48:28.421163961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 13 23:48:28.453852 systemd[1]: Started cri-containerd-4ef775062853ffe01765a9478d22638a7d116afcad4e4b4a9e59cf57379d64eb.scope - libcontainer container 4ef775062853ffe01765a9478d22638a7d116afcad4e4b4a9e59cf57379d64eb.
May 13 23:48:28.466302 containerd[1479]: time="2025-05-13T23:48:28.466209923Z" level=info msg="connecting to shim eb47df0243219e73575188e4f4c5f9fdd84a2cd13bf2d384ac0dd7f996ce934e" address="unix:///run/containerd/s/c2108c44edb6d59e71b3ef3bdb5d17295085f7dfbde1b2d0712b6cd683e284be" namespace=k8s.io protocol=ttrpc version=3
May 13 23:48:28.485285 containerd[1479]: time="2025-05-13T23:48:28.485197051Z" level=info msg="connecting to shim 70711267c1a1e756987ce337d9afea22e66cf0c3e5197119f548252b6e553cdf" address="unix:///run/containerd/s/c71d353da0765a8881a24587926edd8ea47017c87945463af7d6b0ccc0702844" namespace=k8s.io protocol=ttrpc version=3
May 13 23:48:28.500696 containerd[1479]: time="2025-05-13T23:48:28.500421634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:708990f1c3be3131b863e6447823e29d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ef775062853ffe01765a9478d22638a7d116afcad4e4b4a9e59cf57379d64eb\""
May 13 23:48:28.509781 systemd[1]: Started cri-containerd-eb47df0243219e73575188e4f4c5f9fdd84a2cd13bf2d384ac0dd7f996ce934e.scope - libcontainer container eb47df0243219e73575188e4f4c5f9fdd84a2cd13bf2d384ac0dd7f996ce934e.
May 13 23:48:28.510696 containerd[1479]: time="2025-05-13T23:48:28.510655714Z" level=info msg="CreateContainer within sandbox \"4ef775062853ffe01765a9478d22638a7d116afcad4e4b4a9e59cf57379d64eb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 23:48:28.513734 systemd[1]: Started cri-containerd-70711267c1a1e756987ce337d9afea22e66cf0c3e5197119f548252b6e553cdf.scope - libcontainer container 70711267c1a1e756987ce337d9afea22e66cf0c3e5197119f548252b6e553cdf.
May 13 23:48:28.525389 containerd[1479]: time="2025-05-13T23:48:28.525341509Z" level=info msg="Container 1df8db4c7d3895ee7243024f086b68c863932b020897448761ab22ec686810a8: CDI devices from CRI Config.CDIDevices: []"
May 13 23:48:28.543027 containerd[1479]: time="2025-05-13T23:48:28.542977832Z" level=info msg="CreateContainer within sandbox \"4ef775062853ffe01765a9478d22638a7d116afcad4e4b4a9e59cf57379d64eb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1df8db4c7d3895ee7243024f086b68c863932b020897448761ab22ec686810a8\""
May 13 23:48:28.544272 containerd[1479]: time="2025-05-13T23:48:28.543596801Z" level=info msg="StartContainer for \"1df8db4c7d3895ee7243024f086b68c863932b020897448761ab22ec686810a8\""
May 13 23:48:28.545286 containerd[1479]: time="2025-05-13T23:48:28.545256790Z" level=info msg="connecting to shim 1df8db4c7d3895ee7243024f086b68c863932b020897448761ab22ec686810a8" address="unix:///run/containerd/s/c92aa2c589448e76a93ef752d314e2e8d80e5424d21d76cde06661a01b64a3a2" protocol=ttrpc version=3
May 13 23:48:28.550902 kubelet[2220]: E0513 23:48:28.550856 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms"
May 13 23:48:28.558377 containerd[1479]: time="2025-05-13T23:48:28.557931265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb47df0243219e73575188e4f4c5f9fdd84a2cd13bf2d384ac0dd7f996ce934e\""
May 13 23:48:28.561314 containerd[1479]: time="2025-05-13T23:48:28.561265492Z" level=info msg="CreateContainer within sandbox \"eb47df0243219e73575188e4f4c5f9fdd84a2cd13bf2d384ac0dd7f996ce934e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 23:48:28.573345 containerd[1479]: time="2025-05-13T23:48:28.573289030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"70711267c1a1e756987ce337d9afea22e66cf0c3e5197119f548252b6e553cdf\""
May 13 23:48:28.574778 systemd[1]: Started cri-containerd-1df8db4c7d3895ee7243024f086b68c863932b020897448761ab22ec686810a8.scope - libcontainer container 1df8db4c7d3895ee7243024f086b68c863932b020897448761ab22ec686810a8.
May 13 23:48:28.576712 containerd[1479]: time="2025-05-13T23:48:28.576674262Z" level=info msg="CreateContainer within sandbox \"70711267c1a1e756987ce337d9afea22e66cf0c3e5197119f548252b6e553cdf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 23:48:28.577775 containerd[1479]: time="2025-05-13T23:48:28.577327753Z" level=info msg="Container 033211b5087533bfca7e73e3129ee325e4bd2ad277cdfc45acb883abc5282c5b: CDI devices from CRI Config.CDIDevices: []"
May 13 23:48:28.591713 containerd[1479]: time="2025-05-13T23:48:28.591667366Z" level=info msg="Container ec2f42843fd0e281cb530d3c88915ce0070b353a04f0195ebb4818f24a8ff6eb: CDI devices from CRI Config.CDIDevices: []"
May 13 23:48:28.591919 containerd[1479]: time="2025-05-13T23:48:28.591868913Z" level=info msg="CreateContainer within sandbox \"eb47df0243219e73575188e4f4c5f9fdd84a2cd13bf2d384ac0dd7f996ce934e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"033211b5087533bfca7e73e3129ee325e4bd2ad277cdfc45acb883abc5282c5b\""
May 13 23:48:28.592498 containerd[1479]: time="2025-05-13T23:48:28.592464135Z" level=info msg="StartContainer for \"033211b5087533bfca7e73e3129ee325e4bd2ad277cdfc45acb883abc5282c5b\""
May 13 23:48:28.594851 containerd[1479]: time="2025-05-13T23:48:28.594781367Z" level=info msg="connecting to shim 033211b5087533bfca7e73e3129ee325e4bd2ad277cdfc45acb883abc5282c5b" address="unix:///run/containerd/s/c2108c44edb6d59e71b3ef3bdb5d17295085f7dfbde1b2d0712b6cd683e284be" protocol=ttrpc version=3
May 13 23:48:28.606767 containerd[1479]: time="2025-05-13T23:48:28.606691240Z" level=info msg="CreateContainer within sandbox \"70711267c1a1e756987ce337d9afea22e66cf0c3e5197119f548252b6e553cdf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec2f42843fd0e281cb530d3c88915ce0070b353a04f0195ebb4818f24a8ff6eb\""
May 13 23:48:28.607378 containerd[1479]: time="2025-05-13T23:48:28.607348443Z" level=info msg="StartContainer for \"ec2f42843fd0e281cb530d3c88915ce0070b353a04f0195ebb4818f24a8ff6eb\""
May 13 23:48:28.609053 containerd[1479]: time="2025-05-13T23:48:28.609015896Z" level=info msg="connecting to shim ec2f42843fd0e281cb530d3c88915ce0070b353a04f0195ebb4818f24a8ff6eb" address="unix:///run/containerd/s/c71d353da0765a8881a24587926edd8ea47017c87945463af7d6b0ccc0702844" protocol=ttrpc version=3
May 13 23:48:28.625097 containerd[1479]: time="2025-05-13T23:48:28.625041160Z" level=info msg="StartContainer for \"1df8db4c7d3895ee7243024f086b68c863932b020897448761ab22ec686810a8\" returns successfully"
May 13 23:48:28.625769 systemd[1]: Started cri-containerd-033211b5087533bfca7e73e3129ee325e4bd2ad277cdfc45acb883abc5282c5b.scope - libcontainer container 033211b5087533bfca7e73e3129ee325e4bd2ad277cdfc45acb883abc5282c5b.
May 13 23:48:28.644788 systemd[1]: Started cri-containerd-ec2f42843fd0e281cb530d3c88915ce0070b353a04f0195ebb4818f24a8ff6eb.scope - libcontainer container ec2f42843fd0e281cb530d3c88915ce0070b353a04f0195ebb4818f24a8ff6eb.
May 13 23:48:28.721300 containerd[1479]: time="2025-05-13T23:48:28.720982578Z" level=info msg="StartContainer for \"033211b5087533bfca7e73e3129ee325e4bd2ad277cdfc45acb883abc5282c5b\" returns successfully"
May 13 23:48:28.730993 containerd[1479]: time="2025-05-13T23:48:28.722405301Z" level=info msg="StartContainer for \"ec2f42843fd0e281cb530d3c88915ce0070b353a04f0195ebb4818f24a8ff6eb\" returns successfully"
May 13 23:48:28.731124 kubelet[2220]: I0513 23:48:28.723599 2220 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 23:48:28.731124 kubelet[2220]: E0513 23:48:28.724012 2220 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
May 13 23:48:28.841901 kubelet[2220]: W0513 23:48:28.841842 2220 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
May 13 23:48:28.842074 kubelet[2220]: E0513 23:48:28.842040 2220 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
May 13 23:48:28.975716 kubelet[2220]: E0513 23:48:28.973903 2220 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 23:48:28.978790 kubelet[2220]: E0513 23:48:28.978478 2220 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 23:48:28.981599 kubelet[2220]: E0513 23:48:28.981441 2220 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 23:48:29.526791 kubelet[2220]: I0513 23:48:29.526336 2220 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 23:48:29.982906 kubelet[2220]: E0513 23:48:29.982709 2220 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 23:48:29.983026 kubelet[2220]: E0513 23:48:29.982984 2220 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 23:48:30.314628 kubelet[2220]: E0513 23:48:30.314516 2220 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 23:48:30.341220 kubelet[2220]: I0513 23:48:30.341177 2220 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 13 23:48:30.347930 kubelet[2220]: I0513 23:48:30.347894 2220 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 13 23:48:30.368222 kubelet[2220]: E0513 23:48:30.367992 2220 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 13 23:48:30.368222 kubelet[2220]: I0513 23:48:30.368021 2220 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 23:48:30.371804 kubelet[2220]: E0513 23:48:30.371779 2220 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 13 23:48:30.372085 kubelet[2220]: I0513 23:48:30.371901 2220 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 23:48:30.377098 kubelet[2220]: E0513 23:48:30.377061 2220 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 13 23:48:30.939887 kubelet[2220]: I0513 23:48:30.939844 2220 apiserver.go:52] "Watching apiserver"
May 13 23:48:30.948813 kubelet[2220]: I0513 23:48:30.948766 2220 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 23:48:31.393447 kubelet[2220]: I0513 23:48:31.393225 2220 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 23:48:32.558977 systemd[1]: Reload requested from client PID 2496 ('systemctl') (unit session-7.scope)...
May 13 23:48:32.558998 systemd[1]: Reloading...
May 13 23:48:32.651769 zram_generator::config[2545]: No configuration found.
May 13 23:48:32.748017 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:48:32.852619 systemd[1]: Reloading finished in 293 ms.
May 13 23:48:32.880180 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:48:32.892518 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:48:32.892809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:48:32.892865 systemd[1]: kubelet.service: Consumed 1.315s CPU time, 127.5M memory peak.
May 13 23:48:32.895543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:48:33.048217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:48:33.061992 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:48:33.109282 kubelet[2582]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:48:33.109282 kubelet[2582]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 23:48:33.109282 kubelet[2582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:48:33.109735 kubelet[2582]: I0513 23:48:33.109282 2582 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:48:33.116591 kubelet[2582]: I0513 23:48:33.116436 2582 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 23:48:33.116591 kubelet[2582]: I0513 23:48:33.116475 2582 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:48:33.116919 kubelet[2582]: I0513 23:48:33.116808 2582 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 23:48:33.118190 kubelet[2582]: I0513 23:48:33.118163 2582 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 23:48:33.120849 kubelet[2582]: I0513 23:48:33.120808 2582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:48:33.126399 kubelet[2582]: I0513 23:48:33.126372 2582 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 23:48:33.129599 kubelet[2582]: I0513 23:48:33.129151 2582 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:48:33.129599 kubelet[2582]: I0513 23:48:33.129387 2582 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:48:33.129754 kubelet[2582]: I0513 23:48:33.129426 2582 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 23:48:33.129754 kubelet[2582]: I0513 23:48:33.129650 2582 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:48:33.129754 kubelet[2582]: I0513 23:48:33.129660 2582 container_manager_linux.go:304] "Creating device plugin manager"
May 13 23:48:33.129754 kubelet[2582]: I0513 23:48:33.129705 2582 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:48:33.129898 kubelet[2582]: I0513 23:48:33.129843 2582 kubelet.go:446] "Attempting to sync node with API server"
May 13 23:48:33.129898 kubelet[2582]: I0513 23:48:33.129857 2582 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:48:33.129898 kubelet[2582]: I0513 23:48:33.129879 2582 kubelet.go:352] "Adding apiserver pod source"
May 13 23:48:33.129898 kubelet[2582]: I0513 23:48:33.129892 2582 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:48:33.133011 kubelet[2582]: I0513 23:48:33.132624 2582 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:48:33.133424 kubelet[2582]: I0513 23:48:33.133123 2582 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:48:33.133591 kubelet[2582]: I0513 23:48:33.133572 2582 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 23:48:33.133627 kubelet[2582]: I0513 23:48:33.133605 2582 server.go:1287] "Started kubelet"
May 13 23:48:33.134196 kubelet[2582]: I0513 23:48:33.134049 2582 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:48:33.135129 kubelet[2582]: I0513 23:48:33.134979 2582 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:48:33.135129 kubelet[2582]: I0513 23:48:33.135031 2582 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:48:33.135557 kubelet[2582]: I0513 23:48:33.135346 2582 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:48:33.135876 kubelet[2582]: I0513 23:48:33.135680 2582 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 23:48:33.135876 kubelet[2582]: I0513 23:48:33.135834 2582 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 23:48:33.137568 kubelet[2582]: I0513 23:48:33.135982 2582 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:48:33.137568 kubelet[2582]: I0513 23:48:33.136851 2582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 23:48:33.138619 kubelet[2582]: I0513 23:48:33.137824 2582 server.go:490] "Adding debug handlers to kubelet server"
May 13 23:48:33.139081 kubelet[2582]: I0513 23:48:33.139059 2582 factory.go:221] Registration of the containerd container factory successfully
May 13 23:48:33.139081 kubelet[2582]: I0513 23:48:33.139078 2582 factory.go:221] Registration of the systemd container factory successfully
May 13 23:48:33.139164 kubelet[2582]: I0513 23:48:33.139150 2582 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:48:33.142563 kubelet[2582]: E0513 23:48:33.142122 2582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 23:48:33.177289 kubelet[2582]: I0513 23:48:33.177235 2582 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:48:33.179333 kubelet[2582]: I0513 23:48:33.179293 2582 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:48:33.179333 kubelet[2582]: I0513 23:48:33.179330 2582 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 23:48:33.179508 kubelet[2582]: I0513 23:48:33.179353 2582 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 23:48:33.179508 kubelet[2582]: I0513 23:48:33.179361 2582 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 23:48:33.179508 kubelet[2582]: E0513 23:48:33.179417 2582 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:48:33.196229 kubelet[2582]: I0513 23:48:33.196197 2582 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 23:48:33.196229 kubelet[2582]: I0513 23:48:33.196217 2582 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 23:48:33.196229 kubelet[2582]: I0513 23:48:33.196239 2582 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:48:33.196430 kubelet[2582]: I0513 23:48:33.196398 2582 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 23:48:33.196473 kubelet[2582]: I0513 23:48:33.196410 2582 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 23:48:33.196473 kubelet[2582]: I0513 23:48:33.196442 2582 policy_none.go:49] "None policy: Start"
May 13 23:48:33.196473 kubelet[2582]: I0513 23:48:33.196450 2582 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 23:48:33.196473 kubelet[2582]: I0513 23:48:33.196460 2582 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:48:33.196629 kubelet[2582]: I0513 23:48:33.196571 2582 state_mem.go:75] "Updated machine memory state"
May 13 23:48:33.201343 kubelet[2582]: I0513 23:48:33.201308 2582 manager.go:519] "Failed to read
data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:48:33.201524 kubelet[2582]: I0513 23:48:33.201500 2582 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:48:33.201650 kubelet[2582]: I0513 23:48:33.201519 2582 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:48:33.201914 kubelet[2582]: I0513 23:48:33.201756 2582 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:48:33.203414 kubelet[2582]: E0513 23:48:33.203366 2582 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 23:48:33.281035 kubelet[2582]: I0513 23:48:33.280978 2582 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:48:33.281173 kubelet[2582]: I0513 23:48:33.281068 2582 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:48:33.281173 kubelet[2582]: I0513 23:48:33.280990 2582 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 23:48:33.288445 kubelet[2582]: E0513 23:48:33.288405 2582 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 23:48:33.304117 kubelet[2582]: I0513 23:48:33.304060 2582 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 23:48:33.310753 kubelet[2582]: I0513 23:48:33.310715 2582 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 23:48:33.311705 kubelet[2582]: I0513 23:48:33.310911 2582 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 23:48:33.337347 kubelet[2582]: I0513 23:48:33.337293 2582 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/708990f1c3be3131b863e6447823e29d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"708990f1c3be3131b863e6447823e29d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:48:33.337347 kubelet[2582]: I0513 23:48:33.337339 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:48:33.337527 kubelet[2582]: I0513 23:48:33.337362 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:48:33.337527 kubelet[2582]: I0513 23:48:33.337380 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:48:33.337527 kubelet[2582]: I0513 23:48:33.337399 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/708990f1c3be3131b863e6447823e29d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"708990f1c3be3131b863e6447823e29d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:48:33.337527 kubelet[2582]: I0513 23:48:33.337437 2582 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/708990f1c3be3131b863e6447823e29d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"708990f1c3be3131b863e6447823e29d\") " pod="kube-system/kube-apiserver-localhost" May 13 23:48:33.337527 kubelet[2582]: I0513 23:48:33.337481 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:48:33.337683 kubelet[2582]: I0513 23:48:33.337499 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:48:33.337683 kubelet[2582]: I0513 23:48:33.337526 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 23:48:34.131685 kubelet[2582]: I0513 23:48:34.131373 2582 apiserver.go:52] "Watching apiserver" May 13 23:48:34.136775 kubelet[2582]: I0513 23:48:34.136574 2582 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:48:34.190777 kubelet[2582]: I0513 23:48:34.190735 2582 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 23:48:34.191254 
kubelet[2582]: I0513 23:48:34.191027 2582 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 23:48:34.199179 kubelet[2582]: E0513 23:48:34.199118 2582 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 23:48:34.200809 kubelet[2582]: E0513 23:48:34.199416 2582 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 23:48:34.227967 kubelet[2582]: I0513 23:48:34.227904 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.227885058 podStartE2EDuration="3.227885058s" podCreationTimestamp="2025-05-13 23:48:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:34.213781946 +0000 UTC m=+1.147667820" watchObservedRunningTime="2025-05-13 23:48:34.227885058 +0000 UTC m=+1.161770932" May 13 23:48:34.242039 kubelet[2582]: I0513 23:48:34.241877 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.241857183 podStartE2EDuration="1.241857183s" podCreationTimestamp="2025-05-13 23:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:34.228041141 +0000 UTC m=+1.161927015" watchObservedRunningTime="2025-05-13 23:48:34.241857183 +0000 UTC m=+1.175743057" May 13 23:48:34.242379 kubelet[2582]: I0513 23:48:34.242334 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.242288748 podStartE2EDuration="1.242288748s" podCreationTimestamp="2025-05-13 23:48:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:34.241797603 +0000 UTC m=+1.175683477" watchObservedRunningTime="2025-05-13 23:48:34.242288748 +0000 UTC m=+1.176174622" May 13 23:48:37.975590 sudo[1676]: pam_unix(sudo:session): session closed for user root May 13 23:48:37.977583 sshd[1675]: Connection closed by 10.0.0.1 port 52672 May 13 23:48:37.978041 sshd-session[1672]: pam_unix(sshd:session): session closed for user core May 13 23:48:37.981953 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:52672.service: Deactivated successfully. May 13 23:48:37.986182 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:48:37.986428 systemd[1]: session-7.scope: Consumed 6.539s CPU time, 228.2M memory peak. May 13 23:48:37.987483 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. May 13 23:48:37.988468 systemd-logind[1457]: Removed session 7. May 13 23:48:38.580819 kubelet[2582]: I0513 23:48:38.580780 2582 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:48:38.582192 kubelet[2582]: I0513 23:48:38.581331 2582 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:48:38.582269 containerd[1479]: time="2025-05-13T23:48:38.581113730Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:48:39.138465 systemd[1]: Created slice kubepods-besteffort-pod21695e15_27fa_43d4_bde6_5e2526fbf74f.slice - libcontainer container kubepods-besteffort-pod21695e15_27fa_43d4_bde6_5e2526fbf74f.slice. 
May 13 23:48:39.175172 kubelet[2582]: I0513 23:48:39.175059 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/21695e15-27fa-43d4-bde6-5e2526fbf74f-kube-proxy\") pod \"kube-proxy-qszs6\" (UID: \"21695e15-27fa-43d4-bde6-5e2526fbf74f\") " pod="kube-system/kube-proxy-qszs6" May 13 23:48:39.175172 kubelet[2582]: I0513 23:48:39.175100 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21695e15-27fa-43d4-bde6-5e2526fbf74f-xtables-lock\") pod \"kube-proxy-qszs6\" (UID: \"21695e15-27fa-43d4-bde6-5e2526fbf74f\") " pod="kube-system/kube-proxy-qszs6" May 13 23:48:39.175172 kubelet[2582]: I0513 23:48:39.175122 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21695e15-27fa-43d4-bde6-5e2526fbf74f-lib-modules\") pod \"kube-proxy-qszs6\" (UID: \"21695e15-27fa-43d4-bde6-5e2526fbf74f\") " pod="kube-system/kube-proxy-qszs6" May 13 23:48:39.175429 kubelet[2582]: I0513 23:48:39.175174 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t84v\" (UniqueName: \"kubernetes.io/projected/21695e15-27fa-43d4-bde6-5e2526fbf74f-kube-api-access-8t84v\") pod \"kube-proxy-qszs6\" (UID: \"21695e15-27fa-43d4-bde6-5e2526fbf74f\") " pod="kube-system/kube-proxy-qszs6" May 13 23:48:39.286159 kubelet[2582]: E0513 23:48:39.286108 2582 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 23:48:39.286159 kubelet[2582]: E0513 23:48:39.286144 2582 projected.go:194] Error preparing data for projected volume kube-api-access-8t84v for pod kube-system/kube-proxy-qszs6: configmap "kube-root-ca.crt" not found May 13 23:48:39.286309 kubelet[2582]: E0513 23:48:39.286203 2582 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21695e15-27fa-43d4-bde6-5e2526fbf74f-kube-api-access-8t84v podName:21695e15-27fa-43d4-bde6-5e2526fbf74f nodeName:}" failed. No retries permitted until 2025-05-13 23:48:39.786182046 +0000 UTC m=+6.720067920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8t84v" (UniqueName: "kubernetes.io/projected/21695e15-27fa-43d4-bde6-5e2526fbf74f-kube-api-access-8t84v") pod "kube-proxy-qszs6" (UID: "21695e15-27fa-43d4-bde6-5e2526fbf74f") : configmap "kube-root-ca.crt" not found May 13 23:48:39.664610 systemd[1]: Created slice kubepods-besteffort-poddd9e7697_8a36_4d3e_bcb5_76a324d92932.slice - libcontainer container kubepods-besteffort-poddd9e7697_8a36_4d3e_bcb5_76a324d92932.slice. May 13 23:48:39.678653 kubelet[2582]: I0513 23:48:39.678599 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd9e7697-8a36-4d3e-bcb5-76a324d92932-var-lib-calico\") pod \"tigera-operator-789496d6f5-xs74g\" (UID: \"dd9e7697-8a36-4d3e-bcb5-76a324d92932\") " pod="tigera-operator/tigera-operator-789496d6f5-xs74g" May 13 23:48:39.678653 kubelet[2582]: I0513 23:48:39.678643 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpgrl\" (UniqueName: \"kubernetes.io/projected/dd9e7697-8a36-4d3e-bcb5-76a324d92932-kube-api-access-wpgrl\") pod \"tigera-operator-789496d6f5-xs74g\" (UID: \"dd9e7697-8a36-4d3e-bcb5-76a324d92932\") " pod="tigera-operator/tigera-operator-789496d6f5-xs74g" May 13 23:48:39.969890 containerd[1479]: time="2025-05-13T23:48:39.969824321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-xs74g,Uid:dd9e7697-8a36-4d3e-bcb5-76a324d92932,Namespace:tigera-operator,Attempt:0,}" May 13 23:48:40.004093 containerd[1479]: time="2025-05-13T23:48:40.004036413Z" level=info 
msg="connecting to shim 38b6d08d5ebd0a091d64507bef0dc4b5d8d4d289c8a0394edcccf813d8697f87" address="unix:///run/containerd/s/391eb846e3873d593e8da7dee707ab702a7c827f64904d5ddac78137cc6cd849" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:40.026808 systemd[1]: Started cri-containerd-38b6d08d5ebd0a091d64507bef0dc4b5d8d4d289c8a0394edcccf813d8697f87.scope - libcontainer container 38b6d08d5ebd0a091d64507bef0dc4b5d8d4d289c8a0394edcccf813d8697f87. May 13 23:48:40.050789 containerd[1479]: time="2025-05-13T23:48:40.050640738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qszs6,Uid:21695e15-27fa-43d4-bde6-5e2526fbf74f,Namespace:kube-system,Attempt:0,}" May 13 23:48:40.062145 containerd[1479]: time="2025-05-13T23:48:40.062097753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-xs74g,Uid:dd9e7697-8a36-4d3e-bcb5-76a324d92932,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"38b6d08d5ebd0a091d64507bef0dc4b5d8d4d289c8a0394edcccf813d8697f87\"" May 13 23:48:40.064622 containerd[1479]: time="2025-05-13T23:48:40.064452466Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 23:48:40.072167 containerd[1479]: time="2025-05-13T23:48:40.072042860Z" level=info msg="connecting to shim 79363eab742a9a148b173f2282fa97e11859574b497ad939c63262fb4ec7e6b5" address="unix:///run/containerd/s/48c35bffa714618955818c5de360d0b95b90f779e4f302278e591ffb51e98fbe" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:40.098773 systemd[1]: Started cri-containerd-79363eab742a9a148b173f2282fa97e11859574b497ad939c63262fb4ec7e6b5.scope - libcontainer container 79363eab742a9a148b173f2282fa97e11859574b497ad939c63262fb4ec7e6b5. 
May 13 23:48:40.124012 containerd[1479]: time="2025-05-13T23:48:40.123877025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qszs6,Uid:21695e15-27fa-43d4-bde6-5e2526fbf74f,Namespace:kube-system,Attempt:0,} returns sandbox id \"79363eab742a9a148b173f2282fa97e11859574b497ad939c63262fb4ec7e6b5\"" May 13 23:48:40.127534 containerd[1479]: time="2025-05-13T23:48:40.127503086Z" level=info msg="CreateContainer within sandbox \"79363eab742a9a148b173f2282fa97e11859574b497ad939c63262fb4ec7e6b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:48:40.149861 containerd[1479]: time="2025-05-13T23:48:40.149815835Z" level=info msg="Container 881c55548d70b9ab20ce7e3eaa9b5d5d35e557574483cfc49fae867d6c59fd4a: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:40.157322 containerd[1479]: time="2025-05-13T23:48:40.157275025Z" level=info msg="CreateContainer within sandbox \"79363eab742a9a148b173f2282fa97e11859574b497ad939c63262fb4ec7e6b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"881c55548d70b9ab20ce7e3eaa9b5d5d35e557574483cfc49fae867d6c59fd4a\"" May 13 23:48:40.158146 containerd[1479]: time="2025-05-13T23:48:40.157840416Z" level=info msg="StartContainer for \"881c55548d70b9ab20ce7e3eaa9b5d5d35e557574483cfc49fae867d6c59fd4a\"" May 13 23:48:40.159518 containerd[1479]: time="2025-05-13T23:48:40.159487450Z" level=info msg="connecting to shim 881c55548d70b9ab20ce7e3eaa9b5d5d35e557574483cfc49fae867d6c59fd4a" address="unix:///run/containerd/s/48c35bffa714618955818c5de360d0b95b90f779e4f302278e591ffb51e98fbe" protocol=ttrpc version=3 May 13 23:48:40.186753 systemd[1]: Started cri-containerd-881c55548d70b9ab20ce7e3eaa9b5d5d35e557574483cfc49fae867d6c59fd4a.scope - libcontainer container 881c55548d70b9ab20ce7e3eaa9b5d5d35e557574483cfc49fae867d6c59fd4a. 
May 13 23:48:40.229464 containerd[1479]: time="2025-05-13T23:48:40.229313830Z" level=info msg="StartContainer for \"881c55548d70b9ab20ce7e3eaa9b5d5d35e557574483cfc49fae867d6c59fd4a\" returns successfully" May 13 23:48:41.229976 kubelet[2582]: I0513 23:48:41.229902 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qszs6" podStartSLOduration=2.229883507 podStartE2EDuration="2.229883507s" podCreationTimestamp="2025-05-13 23:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:48:41.227803084 +0000 UTC m=+8.161688958" watchObservedRunningTime="2025-05-13 23:48:41.229883507 +0000 UTC m=+8.163769341" May 13 23:48:42.084988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932648081.mount: Deactivated successfully. May 13 23:48:42.366903 containerd[1479]: time="2025-05-13T23:48:42.366530366Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:42.367326 containerd[1479]: time="2025-05-13T23:48:42.367293516Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 23:48:42.368105 containerd[1479]: time="2025-05-13T23:48:42.368079193Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:42.370092 containerd[1479]: time="2025-05-13T23:48:42.370034462Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:42.370706 containerd[1479]: time="2025-05-13T23:48:42.370604073Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id 
\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.305944698s" May 13 23:48:42.370706 containerd[1479]: time="2025-05-13T23:48:42.370644285Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 23:48:42.374115 containerd[1479]: time="2025-05-13T23:48:42.373735336Z" level=info msg="CreateContainer within sandbox \"38b6d08d5ebd0a091d64507bef0dc4b5d8d4d289c8a0394edcccf813d8697f87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 23:48:42.384583 containerd[1479]: time="2025-05-13T23:48:42.383569098Z" level=info msg="Container 1e069437f7fe727dd1c860e4a4c0600e1eb1d54160248593c96dd1ad151eac8f: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:42.386128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803170271.mount: Deactivated successfully. 
May 13 23:48:42.392885 containerd[1479]: time="2025-05-13T23:48:42.392600458Z" level=info msg="CreateContainer within sandbox \"38b6d08d5ebd0a091d64507bef0dc4b5d8d4d289c8a0394edcccf813d8697f87\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1e069437f7fe727dd1c860e4a4c0600e1eb1d54160248593c96dd1ad151eac8f\"" May 13 23:48:42.393501 containerd[1479]: time="2025-05-13T23:48:42.393364088Z" level=info msg="StartContainer for \"1e069437f7fe727dd1c860e4a4c0600e1eb1d54160248593c96dd1ad151eac8f\"" May 13 23:48:42.394745 containerd[1479]: time="2025-05-13T23:48:42.394718536Z" level=info msg="connecting to shim 1e069437f7fe727dd1c860e4a4c0600e1eb1d54160248593c96dd1ad151eac8f" address="unix:///run/containerd/s/391eb846e3873d593e8da7dee707ab702a7c827f64904d5ddac78137cc6cd849" protocol=ttrpc version=3 May 13 23:48:42.432772 systemd[1]: Started cri-containerd-1e069437f7fe727dd1c860e4a4c0600e1eb1d54160248593c96dd1ad151eac8f.scope - libcontainer container 1e069437f7fe727dd1c860e4a4c0600e1eb1d54160248593c96dd1ad151eac8f. 
May 13 23:48:42.506122 containerd[1479]: time="2025-05-13T23:48:42.506000451Z" level=info msg="StartContainer for \"1e069437f7fe727dd1c860e4a4c0600e1eb1d54160248593c96dd1ad151eac8f\" returns successfully" May 13 23:48:43.259422 kubelet[2582]: I0513 23:48:43.259332 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-xs74g" podStartSLOduration=1.95061172 podStartE2EDuration="4.259314195s" podCreationTimestamp="2025-05-13 23:48:39 +0000 UTC" firstStartedPulling="2025-05-13 23:48:40.063881154 +0000 UTC m=+6.997767028" lastFinishedPulling="2025-05-13 23:48:42.372583629 +0000 UTC m=+9.306469503" observedRunningTime="2025-05-13 23:48:43.259111537 +0000 UTC m=+10.192997411" watchObservedRunningTime="2025-05-13 23:48:43.259314195 +0000 UTC m=+10.193200069" May 13 23:48:45.977073 systemd[1]: Created slice kubepods-besteffort-podf094eed0_423a_4379_b6d6_2e00470fb00b.slice - libcontainer container kubepods-besteffort-podf094eed0_423a_4379_b6d6_2e00470fb00b.slice. May 13 23:48:46.038604 systemd[1]: Created slice kubepods-besteffort-pod0e403b97_25e9_4de6_8b6b_5ea21eff4eaa.slice - libcontainer container kubepods-besteffort-pod0e403b97_25e9_4de6_8b6b_5ea21eff4eaa.slice. 
May 13 23:48:46.043038 kubelet[2582]: I0513 23:48:46.042993 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-var-run-calico\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043038 kubelet[2582]: I0513 23:48:46.043043 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-cni-log-dir\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043369 kubelet[2582]: I0513 23:48:46.043065 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f094eed0-423a-4379-b6d6-2e00470fb00b-typha-certs\") pod \"calico-typha-5fb67b9d-pwmv4\" (UID: \"f094eed0-423a-4379-b6d6-2e00470fb00b\") " pod="calico-system/calico-typha-5fb67b9d-pwmv4" May 13 23:48:46.043369 kubelet[2582]: I0513 23:48:46.043085 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-var-lib-calico\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043369 kubelet[2582]: I0513 23:48:46.043101 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f094eed0-423a-4379-b6d6-2e00470fb00b-tigera-ca-bundle\") pod \"calico-typha-5fb67b9d-pwmv4\" (UID: \"f094eed0-423a-4379-b6d6-2e00470fb00b\") " pod="calico-system/calico-typha-5fb67b9d-pwmv4" May 13 23:48:46.043369 
kubelet[2582]: I0513 23:48:46.043119 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-policysync\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043369 kubelet[2582]: I0513 23:48:46.043138 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-xtables-lock\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043496 kubelet[2582]: I0513 23:48:46.043156 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-flexvol-driver-host\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043496 kubelet[2582]: I0513 23:48:46.043174 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-cni-bin-dir\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043496 kubelet[2582]: I0513 23:48:46.043190 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-cni-net-dir\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043496 kubelet[2582]: I0513 23:48:46.043205 2582 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-lib-modules\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043496 kubelet[2582]: I0513 23:48:46.043220 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-tigera-ca-bundle\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043651 kubelet[2582]: I0513 23:48:46.043260 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lln2x\" (UniqueName: \"kubernetes.io/projected/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-kube-api-access-lln2x\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.043651 kubelet[2582]: I0513 23:48:46.043307 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btcgv\" (UniqueName: \"kubernetes.io/projected/f094eed0-423a-4379-b6d6-2e00470fb00b-kube-api-access-btcgv\") pod \"calico-typha-5fb67b9d-pwmv4\" (UID: \"f094eed0-423a-4379-b6d6-2e00470fb00b\") " pod="calico-system/calico-typha-5fb67b9d-pwmv4" May 13 23:48:46.043651 kubelet[2582]: I0513 23:48:46.043331 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0e403b97-25e9-4de6-8b6b-5ea21eff4eaa-node-certs\") pod \"calico-node-gmck2\" (UID: \"0e403b97-25e9-4de6-8b6b-5ea21eff4eaa\") " pod="calico-system/calico-node-gmck2" May 13 23:48:46.140588 kubelet[2582]: E0513 23:48:46.140499 2582 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbtkz" podUID="e19e4307-fe13-490d-a3e6-6829c87953d9" May 13 23:48:46.144469 kubelet[2582]: I0513 23:48:46.143608 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e19e4307-fe13-490d-a3e6-6829c87953d9-kubelet-dir\") pod \"csi-node-driver-wbtkz\" (UID: \"e19e4307-fe13-490d-a3e6-6829c87953d9\") " pod="calico-system/csi-node-driver-wbtkz" May 13 23:48:46.144469 kubelet[2582]: I0513 23:48:46.143705 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2bd7\" (UniqueName: \"kubernetes.io/projected/e19e4307-fe13-490d-a3e6-6829c87953d9-kube-api-access-n2bd7\") pod \"csi-node-driver-wbtkz\" (UID: \"e19e4307-fe13-490d-a3e6-6829c87953d9\") " pod="calico-system/csi-node-driver-wbtkz" May 13 23:48:46.144469 kubelet[2582]: I0513 23:48:46.143728 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e19e4307-fe13-490d-a3e6-6829c87953d9-varrun\") pod \"csi-node-driver-wbtkz\" (UID: \"e19e4307-fe13-490d-a3e6-6829c87953d9\") " pod="calico-system/csi-node-driver-wbtkz" May 13 23:48:46.144469 kubelet[2582]: I0513 23:48:46.143885 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e19e4307-fe13-490d-a3e6-6829c87953d9-socket-dir\") pod \"csi-node-driver-wbtkz\" (UID: \"e19e4307-fe13-490d-a3e6-6829c87953d9\") " pod="calico-system/csi-node-driver-wbtkz" May 13 23:48:46.144469 kubelet[2582]: I0513 23:48:46.143907 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e19e4307-fe13-490d-a3e6-6829c87953d9-registration-dir\") pod \"csi-node-driver-wbtkz\" (UID: \"e19e4307-fe13-490d-a3e6-6829c87953d9\") " pod="calico-system/csi-node-driver-wbtkz" May 13 23:48:46.149901 kubelet[2582]: E0513 23:48:46.149856 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.149901 kubelet[2582]: W0513 23:48:46.149889 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.150293 kubelet[2582]: E0513 23:48:46.149921 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.151249 kubelet[2582]: E0513 23:48:46.151215 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.151249 kubelet[2582]: W0513 23:48:46.151244 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.151370 kubelet[2582]: E0513 23:48:46.151262 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.152872 kubelet[2582]: E0513 23:48:46.152845 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.152872 kubelet[2582]: W0513 23:48:46.152867 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.152965 kubelet[2582]: E0513 23:48:46.152887 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.155416 kubelet[2582]: E0513 23:48:46.155390 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.155416 kubelet[2582]: W0513 23:48:46.155408 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.155542 kubelet[2582]: E0513 23:48:46.155432 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.157371 kubelet[2582]: E0513 23:48:46.157323 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.157371 kubelet[2582]: W0513 23:48:46.157351 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.157371 kubelet[2582]: E0513 23:48:46.157370 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.159796 kubelet[2582]: E0513 23:48:46.159765 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.159796 kubelet[2582]: W0513 23:48:46.159788 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.161601 kubelet[2582]: E0513 23:48:46.159810 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.164663 kubelet[2582]: E0513 23:48:46.164635 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.164663 kubelet[2582]: W0513 23:48:46.164655 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.164779 kubelet[2582]: E0513 23:48:46.164677 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.173058 kubelet[2582]: E0513 23:48:46.169675 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.173058 kubelet[2582]: W0513 23:48:46.169709 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.173058 kubelet[2582]: E0513 23:48:46.169737 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.173058 kubelet[2582]: E0513 23:48:46.170068 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.173058 kubelet[2582]: W0513 23:48:46.170084 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.173058 kubelet[2582]: E0513 23:48:46.170102 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.173058 kubelet[2582]: E0513 23:48:46.170480 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.173058 kubelet[2582]: W0513 23:48:46.170500 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.173058 kubelet[2582]: E0513 23:48:46.170787 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.173058 kubelet[2582]: E0513 23:48:46.170950 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.173415 kubelet[2582]: W0513 23:48:46.170963 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.173415 kubelet[2582]: E0513 23:48:46.171080 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.177682 kubelet[2582]: E0513 23:48:46.175530 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.177682 kubelet[2582]: W0513 23:48:46.175583 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.177682 kubelet[2582]: E0513 23:48:46.175847 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.177682 kubelet[2582]: E0513 23:48:46.176066 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.177682 kubelet[2582]: W0513 23:48:46.176076 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.177682 kubelet[2582]: E0513 23:48:46.176167 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.177682 kubelet[2582]: E0513 23:48:46.176354 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.177682 kubelet[2582]: W0513 23:48:46.176364 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.177682 kubelet[2582]: E0513 23:48:46.176731 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.178419 kubelet[2582]: E0513 23:48:46.178390 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.178419 kubelet[2582]: W0513 23:48:46.178410 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.178520 kubelet[2582]: E0513 23:48:46.178492 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.179845 kubelet[2582]: E0513 23:48:46.179819 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.179845 kubelet[2582]: W0513 23:48:46.179842 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.180037 kubelet[2582]: E0513 23:48:46.180001 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.180458 kubelet[2582]: E0513 23:48:46.180436 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.180458 kubelet[2582]: W0513 23:48:46.180456 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.180572 kubelet[2582]: E0513 23:48:46.180495 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.181120 kubelet[2582]: E0513 23:48:46.181104 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.181171 kubelet[2582]: W0513 23:48:46.181120 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.181279 kubelet[2582]: E0513 23:48:46.181185 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.181533 kubelet[2582]: E0513 23:48:46.181519 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.181533 kubelet[2582]: W0513 23:48:46.181532 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.181648 kubelet[2582]: E0513 23:48:46.181579 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.182297 kubelet[2582]: E0513 23:48:46.182277 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.182297 kubelet[2582]: W0513 23:48:46.182294 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.182382 kubelet[2582]: E0513 23:48:46.182336 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.183216 kubelet[2582]: E0513 23:48:46.183130 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.183216 kubelet[2582]: W0513 23:48:46.183147 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.183322 kubelet[2582]: E0513 23:48:46.183216 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.183767 kubelet[2582]: E0513 23:48:46.183747 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.183767 kubelet[2582]: W0513 23:48:46.183763 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.183877 kubelet[2582]: E0513 23:48:46.183810 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.184775 kubelet[2582]: E0513 23:48:46.184754 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.184775 kubelet[2582]: W0513 23:48:46.184771 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.184866 kubelet[2582]: E0513 23:48:46.184820 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.184981 kubelet[2582]: E0513 23:48:46.184920 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.184981 kubelet[2582]: W0513 23:48:46.184927 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.185140 kubelet[2582]: E0513 23:48:46.185096 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.185140 kubelet[2582]: E0513 23:48:46.185129 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.185140 kubelet[2582]: W0513 23:48:46.185141 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.185242 kubelet[2582]: E0513 23:48:46.185173 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.185942 kubelet[2582]: E0513 23:48:46.185859 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.185942 kubelet[2582]: W0513 23:48:46.185875 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.185942 kubelet[2582]: E0513 23:48:46.185912 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.186312 kubelet[2582]: E0513 23:48:46.186286 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.186312 kubelet[2582]: W0513 23:48:46.186303 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.186548 kubelet[2582]: E0513 23:48:46.186483 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.191055 kubelet[2582]: E0513 23:48:46.188686 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.191055 kubelet[2582]: W0513 23:48:46.188708 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.191055 kubelet[2582]: E0513 23:48:46.188752 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.191055 kubelet[2582]: E0513 23:48:46.189077 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.191055 kubelet[2582]: W0513 23:48:46.189089 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.191055 kubelet[2582]: E0513 23:48:46.189123 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.191055 kubelet[2582]: E0513 23:48:46.189334 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.191055 kubelet[2582]: W0513 23:48:46.189357 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.191055 kubelet[2582]: E0513 23:48:46.189405 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.191055 kubelet[2582]: E0513 23:48:46.189545 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.191364 kubelet[2582]: W0513 23:48:46.189566 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.191364 kubelet[2582]: E0513 23:48:46.189578 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.191364 kubelet[2582]: E0513 23:48:46.190189 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.191364 kubelet[2582]: W0513 23:48:46.190207 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.191364 kubelet[2582]: E0513 23:48:46.190219 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.191364 kubelet[2582]: E0513 23:48:46.191008 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.191364 kubelet[2582]: W0513 23:48:46.191023 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.191364 kubelet[2582]: E0513 23:48:46.191053 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.209439 kubelet[2582]: E0513 23:48:46.207967 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.209439 kubelet[2582]: W0513 23:48:46.208145 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.209439 kubelet[2582]: E0513 23:48:46.208169 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.246438 kubelet[2582]: E0513 23:48:46.245132 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.246438 kubelet[2582]: W0513 23:48:46.245173 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.246438 kubelet[2582]: E0513 23:48:46.245194 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.246438 kubelet[2582]: E0513 23:48:46.245664 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.246438 kubelet[2582]: W0513 23:48:46.245677 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.246438 kubelet[2582]: E0513 23:48:46.245691 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.246896 kubelet[2582]: E0513 23:48:46.246549 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.246896 kubelet[2582]: W0513 23:48:46.246575 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.246896 kubelet[2582]: E0513 23:48:46.246595 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.247110 kubelet[2582]: E0513 23:48:46.247094 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.247226 kubelet[2582]: W0513 23:48:46.247211 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.247518 kubelet[2582]: E0513 23:48:46.247365 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.248254 kubelet[2582]: E0513 23:48:46.247609 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.248254 kubelet[2582]: W0513 23:48:46.247633 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.248254 kubelet[2582]: E0513 23:48:46.247654 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.248254 kubelet[2582]: E0513 23:48:46.247836 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.248254 kubelet[2582]: W0513 23:48:46.247847 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.248254 kubelet[2582]: E0513 23:48:46.247894 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.248254 kubelet[2582]: E0513 23:48:46.248124 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.248254 kubelet[2582]: W0513 23:48:46.248133 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.248254 kubelet[2582]: E0513 23:48:46.248170 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.248481 kubelet[2582]: E0513 23:48:46.248293 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.248481 kubelet[2582]: W0513 23:48:46.248301 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.248481 kubelet[2582]: E0513 23:48:46.248337 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.249576 kubelet[2582]: E0513 23:48:46.249362 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.249576 kubelet[2582]: W0513 23:48:46.249378 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.249576 kubelet[2582]: E0513 23:48:46.249390 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:48:46.250593 kubelet[2582]: E0513 23:48:46.249656 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.250593 kubelet[2582]: W0513 23:48:46.249667 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.250593 kubelet[2582]: E0513 23:48:46.249711 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.250593 kubelet[2582]: E0513 23:48:46.249861 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:46.250593 kubelet[2582]: W0513 23:48:46.249870 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:46.250593 kubelet[2582]: E0513 23:48:46.249895 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:46.284421 containerd[1479]: time="2025-05-13T23:48:46.284270649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb67b9d-pwmv4,Uid:f094eed0-423a-4379-b6d6-2e00470fb00b,Namespace:calico-system,Attempt:0,}" May 13 23:48:46.342390 containerd[1479]: time="2025-05-13T23:48:46.342151099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gmck2,Uid:0e403b97-25e9-4de6-8b6b-5ea21eff4eaa,Namespace:calico-system,Attempt:0,}" May 13 23:48:46.355632 containerd[1479]: time="2025-05-13T23:48:46.355584720Z" level=info msg="connecting to shim 6d9aed5f15bbbab9d38ca1248eb2807fb06367fea5902fec83d6a113143201f6" address="unix:///run/containerd/s/ad3be3c484cf24c056921c56607d7ab0391dd1dbefe79ef3690ccd60361a6274" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:46.367194 containerd[1479]: time="2025-05-13T23:48:46.367139605Z" level=info msg="connecting to shim ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06" address="unix:///run/containerd/s/f6ed45c990143c30436c20c48b620b03ee45800586ff217d6037bba229d72f08" namespace=k8s.io protocol=ttrpc version=3 May 13 23:48:46.377763 systemd[1]: Started cri-containerd-6d9aed5f15bbbab9d38ca1248eb2807fb06367fea5902fec83d6a113143201f6.scope - libcontainer container 6d9aed5f15bbbab9d38ca1248eb2807fb06367fea5902fec83d6a113143201f6. May 13 23:48:46.386056 systemd[1]: Started cri-containerd-ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06.scope - libcontainer container ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06. 
May 13 23:48:46.431205 containerd[1479]: time="2025-05-13T23:48:46.431164188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gmck2,Uid:0e403b97-25e9-4de6-8b6b-5ea21eff4eaa,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06\"" May 13 23:48:46.433825 containerd[1479]: time="2025-05-13T23:48:46.433542885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:48:46.441940 containerd[1479]: time="2025-05-13T23:48:46.441879269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb67b9d-pwmv4,Uid:f094eed0-423a-4379-b6d6-2e00470fb00b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d9aed5f15bbbab9d38ca1248eb2807fb06367fea5902fec83d6a113143201f6\"" May 13 23:48:47.252207 kubelet[2582]: E0513 23:48:47.252178 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:47.252207 kubelet[2582]: W0513 23:48:47.252196 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:47.253086 kubelet[2582]: E0513 23:48:47.252214 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:47.253086 kubelet[2582]: E0513 23:48:47.252719 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:48:47.253086 kubelet[2582]: W0513 23:48:47.252731 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:48:47.253086 kubelet[2582]: E0513 23:48:47.252775 2582 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:48:47.455403 containerd[1479]: time="2025-05-13T23:48:47.455345744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:47.456580 containerd[1479]: time="2025-05-13T23:48:47.456510132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 23:48:47.457656 containerd[1479]: time="2025-05-13T23:48:47.457624949Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:47.460046 containerd[1479]: time="2025-05-13T23:48:47.460014339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:47.461012 containerd[1479]: time="2025-05-13T23:48:47.460974520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.027377582s" May 13 23:48:47.461051 containerd[1479]: time="2025-05-13T23:48:47.461014690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 23:48:47.463087 containerd[1479]: time="2025-05-13T23:48:47.463039916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:48:47.465197 containerd[1479]: time="2025-05-13T23:48:47.465162765Z" 
level=info msg="CreateContainer within sandbox \"ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:48:47.481493 containerd[1479]: time="2025-05-13T23:48:47.481449117Z" level=info msg="Container 02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:47.501058 containerd[1479]: time="2025-05-13T23:48:47.501011303Z" level=info msg="CreateContainer within sandbox \"ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7\"" May 13 23:48:47.501741 containerd[1479]: time="2025-05-13T23:48:47.501712224Z" level=info msg="StartContainer for \"02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7\"" May 13 23:48:47.503509 containerd[1479]: time="2025-05-13T23:48:47.503413816Z" level=info msg="connecting to shim 02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7" address="unix:///run/containerd/s/f6ed45c990143c30436c20c48b620b03ee45800586ff217d6037bba229d72f08" protocol=ttrpc version=3 May 13 23:48:47.526931 systemd[1]: Started cri-containerd-02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7.scope - libcontainer container 02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7. May 13 23:48:47.610462 containerd[1479]: time="2025-05-13T23:48:47.610378816Z" level=info msg="StartContainer for \"02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7\" returns successfully" May 13 23:48:47.624079 systemd[1]: cri-containerd-02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7.scope: Deactivated successfully. 
May 13 23:48:47.644524 containerd[1479]: time="2025-05-13T23:48:47.644446064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7\" id:\"02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7\" pid:3165 exited_at:{seconds:1747180127 nanos:636915049}" May 13 23:48:47.648964 containerd[1479]: time="2025-05-13T23:48:47.648901250Z" level=info msg="received exit event container_id:\"02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7\" id:\"02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7\" pid:3165 exited_at:{seconds:1747180127 nanos:636915049}" May 13 23:48:47.695197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02ebd49c9df88a6234a53eb7530d18a1be7ca6851861260c7d2b5d4c9aa11be7-rootfs.mount: Deactivated successfully. May 13 23:48:48.179884 kubelet[2582]: E0513 23:48:48.179824 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbtkz" podUID="e19e4307-fe13-490d-a3e6-6829c87953d9" May 13 23:48:48.947645 containerd[1479]: time="2025-05-13T23:48:48.947587640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:48.948696 containerd[1479]: time="2025-05-13T23:48:48.948457110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 23:48:48.949462 containerd[1479]: time="2025-05-13T23:48:48.949428843Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:48.951954 containerd[1479]: time="2025-05-13T23:48:48.951923468Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:48.952661 containerd[1479]: time="2025-05-13T23:48:48.952622781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.489521331s" May 13 23:48:48.952738 containerd[1479]: time="2025-05-13T23:48:48.952672912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 23:48:48.953738 containerd[1479]: time="2025-05-13T23:48:48.953656167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:48:48.968256 containerd[1479]: time="2025-05-13T23:48:48.968195307Z" level=info msg="CreateContainer within sandbox \"6d9aed5f15bbbab9d38ca1248eb2807fb06367fea5902fec83d6a113143201f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:48:48.974980 containerd[1479]: time="2025-05-13T23:48:48.974513129Z" level=info msg="Container f784e9cc2c3aba8c85a4392b9b75fc3b809b540fb4134eda9198a3c2434dc43f: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:48.981435 containerd[1479]: time="2025-05-13T23:48:48.981386113Z" level=info msg="CreateContainer within sandbox \"6d9aed5f15bbbab9d38ca1248eb2807fb06367fea5902fec83d6a113143201f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f784e9cc2c3aba8c85a4392b9b75fc3b809b540fb4134eda9198a3c2434dc43f\"" May 13 23:48:48.983176 containerd[1479]: time="2025-05-13T23:48:48.982126595Z" level=info msg="StartContainer for 
\"f784e9cc2c3aba8c85a4392b9b75fc3b809b540fb4134eda9198a3c2434dc43f\"" May 13 23:48:48.983956 containerd[1479]: time="2025-05-13T23:48:48.983929389Z" level=info msg="connecting to shim f784e9cc2c3aba8c85a4392b9b75fc3b809b540fb4134eda9198a3c2434dc43f" address="unix:///run/containerd/s/ad3be3c484cf24c056921c56607d7ab0391dd1dbefe79ef3690ccd60361a6274" protocol=ttrpc version=3 May 13 23:48:49.008739 systemd[1]: Started cri-containerd-f784e9cc2c3aba8c85a4392b9b75fc3b809b540fb4134eda9198a3c2434dc43f.scope - libcontainer container f784e9cc2c3aba8c85a4392b9b75fc3b809b540fb4134eda9198a3c2434dc43f. May 13 23:48:49.122632 containerd[1479]: time="2025-05-13T23:48:49.122541778Z" level=info msg="StartContainer for \"f784e9cc2c3aba8c85a4392b9b75fc3b809b540fb4134eda9198a3c2434dc43f\" returns successfully" May 13 23:48:49.279967 kubelet[2582]: I0513 23:48:49.279512 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fb67b9d-pwmv4" podStartSLOduration=1.769212019 podStartE2EDuration="4.279495238s" podCreationTimestamp="2025-05-13 23:48:45 +0000 UTC" firstStartedPulling="2025-05-13 23:48:46.44324564 +0000 UTC m=+13.377131514" lastFinishedPulling="2025-05-13 23:48:48.953528899 +0000 UTC m=+15.887414733" observedRunningTime="2025-05-13 23:48:49.279197296 +0000 UTC m=+16.213083170" watchObservedRunningTime="2025-05-13 23:48:49.279495238 +0000 UTC m=+16.213381112" May 13 23:48:50.112667 update_engine[1459]: I20250513 23:48:50.112597 1459 update_attempter.cc:509] Updating boot flags... 
May 13 23:48:50.180715 kubelet[2582]: E0513 23:48:50.180203 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbtkz" podUID="e19e4307-fe13-490d-a3e6-6829c87953d9" May 13 23:48:50.218833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3254) May 13 23:48:50.275577 kubelet[2582]: I0513 23:48:50.274940 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:48:50.281768 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3258) May 13 23:48:50.309668 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3258) May 13 23:48:52.180382 kubelet[2582]: E0513 23:48:52.179952 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wbtkz" podUID="e19e4307-fe13-490d-a3e6-6829c87953d9" May 13 23:48:52.873637 containerd[1479]: time="2025-05-13T23:48:52.873578735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:52.874302 containerd[1479]: time="2025-05-13T23:48:52.874233333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 23:48:52.875130 containerd[1479]: time="2025-05-13T23:48:52.875095167Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:52.877859 containerd[1479]: 
time="2025-05-13T23:48:52.877818695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:52.878501 containerd[1479]: time="2025-05-13T23:48:52.878464370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.924770316s" May 13 23:48:52.878534 containerd[1479]: time="2025-05-13T23:48:52.878502417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 23:48:52.881870 containerd[1479]: time="2025-05-13T23:48:52.881818371Z" level=info msg="CreateContainer within sandbox \"ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:48:52.893602 containerd[1479]: time="2025-05-13T23:48:52.892888873Z" level=info msg="Container f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:52.909576 containerd[1479]: time="2025-05-13T23:48:52.909510609Z" level=info msg="CreateContainer within sandbox \"ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58\"" May 13 23:48:52.911630 containerd[1479]: time="2025-05-13T23:48:52.911596303Z" level=info msg="StartContainer for \"f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58\"" May 13 23:48:52.913298 containerd[1479]: time="2025-05-13T23:48:52.913223914Z" 
level=info msg="connecting to shim f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58" address="unix:///run/containerd/s/f6ed45c990143c30436c20c48b620b03ee45800586ff217d6037bba229d72f08" protocol=ttrpc version=3 May 13 23:48:52.943779 systemd[1]: Started cri-containerd-f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58.scope - libcontainer container f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58. May 13 23:48:52.980436 containerd[1479]: time="2025-05-13T23:48:52.980383900Z" level=info msg="StartContainer for \"f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58\" returns successfully" May 13 23:48:53.625735 systemd[1]: cri-containerd-f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58.scope: Deactivated successfully. May 13 23:48:53.626064 systemd[1]: cri-containerd-f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58.scope: Consumed 507ms CPU time, 158.2M memory peak, 4K read from disk, 150.3M written to disk. May 13 23:48:53.628705 containerd[1479]: time="2025-05-13T23:48:53.627211806Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58\" id:\"f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58\" pid:3282 exited_at:{seconds:1747180133 nanos:626837142}" May 13 23:48:53.628705 containerd[1479]: time="2025-05-13T23:48:53.627291420Z" level=info msg="received exit event container_id:\"f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58\" id:\"f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58\" pid:3282 exited_at:{seconds:1747180133 nanos:626837142}" May 13 23:48:53.653790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f787aa88b691e92ab2707eaaafcb534853b37c942738e32ffa327e6ef7e5aa58-rootfs.mount: Deactivated successfully. 
May 13 23:48:53.655472 kubelet[2582]: I0513 23:48:53.655163 2582 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 23:48:53.749068 systemd[1]: Created slice kubepods-besteffort-pod9662eb3c_494a_430d_876b_4282439856af.slice - libcontainer container kubepods-besteffort-pod9662eb3c_494a_430d_876b_4282439856af.slice. May 13 23:48:53.756315 systemd[1]: Created slice kubepods-besteffort-pod899bb66a_cb95_4fa5_8ad5_a9c8f93b8668.slice - libcontainer container kubepods-besteffort-pod899bb66a_cb95_4fa5_8ad5_a9c8f93b8668.slice. May 13 23:48:53.762394 systemd[1]: Created slice kubepods-burstable-pode5dd69a2_e3f1_4b50_820a_67e59e00cb88.slice - libcontainer container kubepods-burstable-pode5dd69a2_e3f1_4b50_820a_67e59e00cb88.slice. May 13 23:48:53.772608 systemd[1]: Created slice kubepods-besteffort-podded66252_e36f_4904_9b47_67460f4a88c7.slice - libcontainer container kubepods-besteffort-podded66252_e36f_4904_9b47_67460f4a88c7.slice. May 13 23:48:53.781279 systemd[1]: Created slice kubepods-burstable-pod11e30734_8276_4815_bf03_40af4e03d3a6.slice - libcontainer container kubepods-burstable-pod11e30734_8276_4815_bf03_40af4e03d3a6.slice. 
May 13 23:48:53.814127 kubelet[2582]: I0513 23:48:53.813907 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ded66252-e36f-4904-9b47-67460f4a88c7-calico-apiserver-certs\") pod \"calico-apiserver-6446b7dc94-fscxf\" (UID: \"ded66252-e36f-4904-9b47-67460f4a88c7\") " pod="calico-apiserver/calico-apiserver-6446b7dc94-fscxf" May 13 23:48:53.814127 kubelet[2582]: I0513 23:48:53.813968 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5dd69a2-e3f1-4b50-820a-67e59e00cb88-config-volume\") pod \"coredns-668d6bf9bc-qqgk2\" (UID: \"e5dd69a2-e3f1-4b50-820a-67e59e00cb88\") " pod="kube-system/coredns-668d6bf9bc-qqgk2" May 13 23:48:53.814127 kubelet[2582]: I0513 23:48:53.813993 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/899bb66a-cb95-4fa5-8ad5-a9c8f93b8668-calico-apiserver-certs\") pod \"calico-apiserver-6446b7dc94-zh6x9\" (UID: \"899bb66a-cb95-4fa5-8ad5-a9c8f93b8668\") " pod="calico-apiserver/calico-apiserver-6446b7dc94-zh6x9" May 13 23:48:53.814127 kubelet[2582]: I0513 23:48:53.814011 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m67t\" (UniqueName: \"kubernetes.io/projected/899bb66a-cb95-4fa5-8ad5-a9c8f93b8668-kube-api-access-7m67t\") pod \"calico-apiserver-6446b7dc94-zh6x9\" (UID: \"899bb66a-cb95-4fa5-8ad5-a9c8f93b8668\") " pod="calico-apiserver/calico-apiserver-6446b7dc94-zh6x9" May 13 23:48:53.814127 kubelet[2582]: I0513 23:48:53.814032 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvdkq\" (UniqueName: \"kubernetes.io/projected/e5dd69a2-e3f1-4b50-820a-67e59e00cb88-kube-api-access-xvdkq\") pod 
\"coredns-668d6bf9bc-qqgk2\" (UID: \"e5dd69a2-e3f1-4b50-820a-67e59e00cb88\") " pod="kube-system/coredns-668d6bf9bc-qqgk2" May 13 23:48:53.814459 kubelet[2582]: I0513 23:48:53.814054 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9662eb3c-494a-430d-876b-4282439856af-tigera-ca-bundle\") pod \"calico-kube-controllers-77c58f7969-87dkr\" (UID: \"9662eb3c-494a-430d-876b-4282439856af\") " pod="calico-system/calico-kube-controllers-77c58f7969-87dkr" May 13 23:48:53.814459 kubelet[2582]: I0513 23:48:53.814077 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzx4\" (UniqueName: \"kubernetes.io/projected/9662eb3c-494a-430d-876b-4282439856af-kube-api-access-2mzx4\") pod \"calico-kube-controllers-77c58f7969-87dkr\" (UID: \"9662eb3c-494a-430d-876b-4282439856af\") " pod="calico-system/calico-kube-controllers-77c58f7969-87dkr" May 13 23:48:53.814459 kubelet[2582]: I0513 23:48:53.814151 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11e30734-8276-4815-bf03-40af4e03d3a6-config-volume\") pod \"coredns-668d6bf9bc-hm7hx\" (UID: \"11e30734-8276-4815-bf03-40af4e03d3a6\") " pod="kube-system/coredns-668d6bf9bc-hm7hx" May 13 23:48:53.814459 kubelet[2582]: I0513 23:48:53.814185 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6bl\" (UniqueName: \"kubernetes.io/projected/11e30734-8276-4815-bf03-40af4e03d3a6-kube-api-access-7k6bl\") pod \"coredns-668d6bf9bc-hm7hx\" (UID: \"11e30734-8276-4815-bf03-40af4e03d3a6\") " pod="kube-system/coredns-668d6bf9bc-hm7hx" May 13 23:48:53.814459 kubelet[2582]: I0513 23:48:53.814210 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ck8lx\" (UniqueName: \"kubernetes.io/projected/ded66252-e36f-4904-9b47-67460f4a88c7-kube-api-access-ck8lx\") pod \"calico-apiserver-6446b7dc94-fscxf\" (UID: \"ded66252-e36f-4904-9b47-67460f4a88c7\") " pod="calico-apiserver/calico-apiserver-6446b7dc94-fscxf" May 13 23:48:54.053921 containerd[1479]: time="2025-05-13T23:48:54.053862909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c58f7969-87dkr,Uid:9662eb3c-494a-430d-876b-4282439856af,Namespace:calico-system,Attempt:0,}" May 13 23:48:54.060545 containerd[1479]: time="2025-05-13T23:48:54.060476866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6446b7dc94-zh6x9,Uid:899bb66a-cb95-4fa5-8ad5-a9c8f93b8668,Namespace:calico-apiserver,Attempt:0,}" May 13 23:48:54.068327 containerd[1479]: time="2025-05-13T23:48:54.068224287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqgk2,Uid:e5dd69a2-e3f1-4b50-820a-67e59e00cb88,Namespace:kube-system,Attempt:0,}" May 13 23:48:54.080501 containerd[1479]: time="2025-05-13T23:48:54.080440435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6446b7dc94-fscxf,Uid:ded66252-e36f-4904-9b47-67460f4a88c7,Namespace:calico-apiserver,Attempt:0,}" May 13 23:48:54.095665 containerd[1479]: time="2025-05-13T23:48:54.092750238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hm7hx,Uid:11e30734-8276-4815-bf03-40af4e03d3a6,Namespace:kube-system,Attempt:0,}" May 13 23:48:54.299613 systemd[1]: Created slice kubepods-besteffort-pode19e4307_fe13_490d_a3e6_6829c87953d9.slice - libcontainer container kubepods-besteffort-pode19e4307_fe13_490d_a3e6_6829c87953d9.slice. 
May 13 23:48:54.333798 containerd[1479]: time="2025-05-13T23:48:54.328150549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:48:54.350434 containerd[1479]: time="2025-05-13T23:48:54.347262619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbtkz,Uid:e19e4307-fe13-490d-a3e6-6829c87953d9,Namespace:calico-system,Attempt:0,}" May 13 23:48:54.949755 containerd[1479]: time="2025-05-13T23:48:54.949676181Z" level=error msg="Failed to destroy network for sandbox \"203593670e731af7d099f508f65554fdbd1b5b41c99305a4f0f34cb888643189\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.953198 systemd[1]: run-netns-cni\x2dc5667ece\x2da39d\x2db059\x2d9e61\x2d0587847e14df.mount: Deactivated successfully. May 13 23:48:54.959045 containerd[1479]: time="2025-05-13T23:48:54.958706530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbtkz,Uid:e19e4307-fe13-490d-a3e6-6829c87953d9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"203593670e731af7d099f508f65554fdbd1b5b41c99305a4f0f34cb888643189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.959392 kubelet[2582]: E0513 23:48:54.959337 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203593670e731af7d099f508f65554fdbd1b5b41c99305a4f0f34cb888643189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.962865 kubelet[2582]: E0513 23:48:54.962722 2582 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203593670e731af7d099f508f65554fdbd1b5b41c99305a4f0f34cb888643189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbtkz" May 13 23:48:54.962865 kubelet[2582]: E0513 23:48:54.962792 2582 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203593670e731af7d099f508f65554fdbd1b5b41c99305a4f0f34cb888643189\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wbtkz" May 13 23:48:54.966114 containerd[1479]: time="2025-05-13T23:48:54.966064048Z" level=error msg="Failed to destroy network for sandbox \"21fa77168f6b0f162a2d731c98a0826e5382a621ed24726d5f338eaa6ba28af8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.968074 systemd[1]: run-netns-cni\x2dbbd0b271\x2d2437\x2dab42\x2d8058\x2d628ed78f317c.mount: Deactivated successfully. 
May 13 23:48:54.969294 kubelet[2582]: E0513 23:48:54.969230 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wbtkz_calico-system(e19e4307-fe13-490d-a3e6-6829c87953d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wbtkz_calico-system(e19e4307-fe13-490d-a3e6-6829c87953d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"203593670e731af7d099f508f65554fdbd1b5b41c99305a4f0f34cb888643189\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wbtkz" podUID="e19e4307-fe13-490d-a3e6-6829c87953d9" May 13 23:48:54.970394 containerd[1479]: time="2025-05-13T23:48:54.970247249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c58f7969-87dkr,Uid:9662eb3c-494a-430d-876b-4282439856af,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21fa77168f6b0f162a2d731c98a0826e5382a621ed24726d5f338eaa6ba28af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.971213 kubelet[2582]: E0513 23:48:54.970531 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21fa77168f6b0f162a2d731c98a0826e5382a621ed24726d5f338eaa6ba28af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.971213 kubelet[2582]: E0513 23:48:54.970640 2582 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"21fa77168f6b0f162a2d731c98a0826e5382a621ed24726d5f338eaa6ba28af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77c58f7969-87dkr" May 13 23:48:54.971213 kubelet[2582]: E0513 23:48:54.970661 2582 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21fa77168f6b0f162a2d731c98a0826e5382a621ed24726d5f338eaa6ba28af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77c58f7969-87dkr" May 13 23:48:54.972013 kubelet[2582]: E0513 23:48:54.970776 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77c58f7969-87dkr_calico-system(9662eb3c-494a-430d-876b-4282439856af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77c58f7969-87dkr_calico-system(9662eb3c-494a-430d-876b-4282439856af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21fa77168f6b0f162a2d731c98a0826e5382a621ed24726d5f338eaa6ba28af8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77c58f7969-87dkr" podUID="9662eb3c-494a-430d-876b-4282439856af" May 13 23:48:54.973972 containerd[1479]: time="2025-05-13T23:48:54.972194686Z" level=error msg="Failed to destroy network for sandbox \"e3db65c41b6aff9639031633c200db80b21deddec61eff2713ca1dfc0e92447d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.973972 containerd[1479]: time="2025-05-13T23:48:54.972870396Z" level=error msg="Failed to destroy network for sandbox \"aba47c5d1a8a026805601912774e6783ac8de3fb5fc9318f6b3be8d0493e27d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.975927 systemd[1]: run-netns-cni\x2db84f17af\x2da656\x2d79d2\x2dedba\x2d83f32d0b65bd.mount: Deactivated successfully. May 13 23:48:54.980294 systemd[1]: run-netns-cni\x2d09197f00\x2dc12e\x2d937b\x2dddb6\x2de6798d238266.mount: Deactivated successfully. May 13 23:48:54.986011 containerd[1479]: time="2025-05-13T23:48:54.985856749Z" level=error msg="Failed to destroy network for sandbox \"d09c470cc0e55152ffc7579c6153f7cf8024f631128ce5cd2a6978d32abc8946\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.988709 containerd[1479]: time="2025-05-13T23:48:54.988645443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hm7hx,Uid:11e30734-8276-4815-bf03-40af4e03d3a6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3db65c41b6aff9639031633c200db80b21deddec61eff2713ca1dfc0e92447d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.989175 kubelet[2582]: E0513 23:48:54.988947 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3db65c41b6aff9639031633c200db80b21deddec61eff2713ca1dfc0e92447d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.989175 kubelet[2582]: E0513 23:48:54.989106 2582 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3db65c41b6aff9639031633c200db80b21deddec61eff2713ca1dfc0e92447d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hm7hx" May 13 23:48:54.989175 kubelet[2582]: E0513 23:48:54.989136 2582 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3db65c41b6aff9639031633c200db80b21deddec61eff2713ca1dfc0e92447d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hm7hx" May 13 23:48:54.989661 kubelet[2582]: E0513 23:48:54.989182 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hm7hx_kube-system(11e30734-8276-4815-bf03-40af4e03d3a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hm7hx_kube-system(11e30734-8276-4815-bf03-40af4e03d3a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3db65c41b6aff9639031633c200db80b21deddec61eff2713ca1dfc0e92447d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hm7hx" podUID="11e30734-8276-4815-bf03-40af4e03d3a6" May 13 23:48:54.991151 containerd[1479]: 
time="2025-05-13T23:48:54.991098602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6446b7dc94-zh6x9,Uid:899bb66a-cb95-4fa5-8ad5-a9c8f93b8668,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba47c5d1a8a026805601912774e6783ac8de3fb5fc9318f6b3be8d0493e27d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.991439 kubelet[2582]: E0513 23:48:54.991411 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba47c5d1a8a026805601912774e6783ac8de3fb5fc9318f6b3be8d0493e27d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.991689 kubelet[2582]: E0513 23:48:54.991667 2582 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba47c5d1a8a026805601912774e6783ac8de3fb5fc9318f6b3be8d0493e27d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6446b7dc94-zh6x9" May 13 23:48:54.991816 kubelet[2582]: E0513 23:48:54.991795 2582 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba47c5d1a8a026805601912774e6783ac8de3fb5fc9318f6b3be8d0493e27d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6446b7dc94-zh6x9" May 13 23:48:54.991979 kubelet[2582]: E0513 23:48:54.991927 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6446b7dc94-zh6x9_calico-apiserver(899bb66a-cb95-4fa5-8ad5-a9c8f93b8668)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6446b7dc94-zh6x9_calico-apiserver(899bb66a-cb95-4fa5-8ad5-a9c8f93b8668)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aba47c5d1a8a026805601912774e6783ac8de3fb5fc9318f6b3be8d0493e27d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6446b7dc94-zh6x9" podUID="899bb66a-cb95-4fa5-8ad5-a9c8f93b8668" May 13 23:48:54.993088 containerd[1479]: time="2025-05-13T23:48:54.993042238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqgk2,Uid:e5dd69a2-e3f1-4b50-820a-67e59e00cb88,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09c470cc0e55152ffc7579c6153f7cf8024f631128ce5cd2a6978d32abc8946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.993438 kubelet[2582]: E0513 23:48:54.993311 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09c470cc0e55152ffc7579c6153f7cf8024f631128ce5cd2a6978d32abc8946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:54.993438 kubelet[2582]: E0513 23:48:54.993369 2582 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09c470cc0e55152ffc7579c6153f7cf8024f631128ce5cd2a6978d32abc8946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qqgk2" May 13 23:48:54.993438 kubelet[2582]: E0513 23:48:54.993386 2582 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09c470cc0e55152ffc7579c6153f7cf8024f631128ce5cd2a6978d32abc8946\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qqgk2" May 13 23:48:54.994042 kubelet[2582]: E0513 23:48:54.993450 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qqgk2_kube-system(e5dd69a2-e3f1-4b50-820a-67e59e00cb88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qqgk2_kube-system(e5dd69a2-e3f1-4b50-820a-67e59e00cb88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d09c470cc0e55152ffc7579c6153f7cf8024f631128ce5cd2a6978d32abc8946\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qqgk2" podUID="e5dd69a2-e3f1-4b50-820a-67e59e00cb88" May 13 23:48:55.003993 containerd[1479]: time="2025-05-13T23:48:55.003941947Z" level=error msg="Failed to destroy network for sandbox \"c82af16e585ced17cc35962a970c957caed5f65c1c36286a24c1aafcbaab1b09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:55.004877 containerd[1479]: time="2025-05-13T23:48:55.004826965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6446b7dc94-fscxf,Uid:ded66252-e36f-4904-9b47-67460f4a88c7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c82af16e585ced17cc35962a970c957caed5f65c1c36286a24c1aafcbaab1b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:55.005173 kubelet[2582]: E0513 23:48:55.005114 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c82af16e585ced17cc35962a970c957caed5f65c1c36286a24c1aafcbaab1b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:48:55.005254 kubelet[2582]: E0513 23:48:55.005191 2582 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c82af16e585ced17cc35962a970c957caed5f65c1c36286a24c1aafcbaab1b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6446b7dc94-fscxf" May 13 23:48:55.005254 kubelet[2582]: E0513 23:48:55.005217 2582 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c82af16e585ced17cc35962a970c957caed5f65c1c36286a24c1aafcbaab1b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6446b7dc94-fscxf" May 13 23:48:55.005322 kubelet[2582]: E0513 23:48:55.005282 2582 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6446b7dc94-fscxf_calico-apiserver(ded66252-e36f-4904-9b47-67460f4a88c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6446b7dc94-fscxf_calico-apiserver(ded66252-e36f-4904-9b47-67460f4a88c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c82af16e585ced17cc35962a970c957caed5f65c1c36286a24c1aafcbaab1b09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6446b7dc94-fscxf" podUID="ded66252-e36f-4904-9b47-67460f4a88c7" May 13 23:48:55.926937 systemd[1]: run-netns-cni\x2d4c101d39\x2d7493\x2d2afd\x2dc028\x2d72f6f6c7abc2.mount: Deactivated successfully. May 13 23:48:55.927023 systemd[1]: run-netns-cni\x2d900d7d13\x2db272\x2d5f2a\x2d946e\x2d6727bd6738cf.mount: Deactivated successfully. May 13 23:48:57.362541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733481898.mount: Deactivated successfully. 
May 13 23:48:57.617584 containerd[1479]: time="2025-05-13T23:48:57.617248187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:57.618478 containerd[1479]: time="2025-05-13T23:48:57.618293655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 23:48:57.619652 containerd[1479]: time="2025-05-13T23:48:57.619442498Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:57.629561 containerd[1479]: time="2025-05-13T23:48:57.629509567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.301311611s" May 13 23:48:57.629561 containerd[1479]: time="2025-05-13T23:48:57.629567295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 23:48:57.636584 containerd[1479]: time="2025-05-13T23:48:57.636525402Z" level=info msg="CreateContainer within sandbox \"ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:48:57.640453 containerd[1479]: time="2025-05-13T23:48:57.640387911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:48:57.649379 containerd[1479]: time="2025-05-13T23:48:57.646799700Z" level=info msg="Container 
8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444: CDI devices from CRI Config.CDIDevices: []" May 13 23:48:57.650797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840884674.mount: Deactivated successfully. May 13 23:48:57.665353 containerd[1479]: time="2025-05-13T23:48:57.665283403Z" level=info msg="CreateContainer within sandbox \"ae00e4fc8b21b2932e4b257276e555b217f638571b05014caac4b81e8afb5e06\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444\"" May 13 23:48:57.665887 containerd[1479]: time="2025-05-13T23:48:57.665841683Z" level=info msg="StartContainer for \"8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444\"" May 13 23:48:57.667426 containerd[1479]: time="2025-05-13T23:48:57.667373660Z" level=info msg="connecting to shim 8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444" address="unix:///run/containerd/s/f6ed45c990143c30436c20c48b620b03ee45800586ff217d6037bba229d72f08" protocol=ttrpc version=3 May 13 23:48:57.687805 systemd[1]: Started cri-containerd-8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444.scope - libcontainer container 8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444. May 13 23:48:57.732255 containerd[1479]: time="2025-05-13T23:48:57.732185777Z" level=info msg="StartContainer for \"8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444\" returns successfully" May 13 23:48:57.929585 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:48:57.929734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 23:48:59.343671 kubelet[2582]: I0513 23:48:59.343631 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:49:03.289861 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:56008.service - OpenSSH per-connection server daemon (10.0.0.1:56008). 
May 13 23:49:03.351629 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 56008 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:49:03.353768 sshd-session[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:49:03.358255 systemd-logind[1457]: New session 8 of user core. May 13 23:49:03.369759 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:49:03.555679 sshd[3789]: Connection closed by 10.0.0.1 port 56008 May 13 23:49:03.555147 sshd-session[3787]: pam_unix(sshd:session): session closed for user core May 13 23:49:03.560923 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:56008.service: Deactivated successfully. May 13 23:49:03.563974 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:49:03.565099 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. May 13 23:49:03.569194 systemd-logind[1457]: Removed session 8. May 13 23:49:06.181007 containerd[1479]: time="2025-05-13T23:49:06.180965322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbtkz,Uid:e19e4307-fe13-490d-a3e6-6829c87953d9,Namespace:calico-system,Attempt:0,}" May 13 23:49:06.181540 containerd[1479]: time="2025-05-13T23:49:06.181044530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqgk2,Uid:e5dd69a2-e3f1-4b50-820a-67e59e00cb88,Namespace:kube-system,Attempt:0,}" May 13 23:49:06.611629 systemd-networkd[1387]: calic169d9bf1ad: Link UP May 13 23:49:06.612273 systemd-networkd[1387]: calic169d9bf1ad: Gained carrier May 13 23:49:06.632426 kubelet[2582]: I0513 23:49:06.632327 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gmck2" podStartSLOduration=9.435405169 podStartE2EDuration="20.632308211s" podCreationTimestamp="2025-05-13 23:48:46 +0000 UTC" firstStartedPulling="2025-05-13 23:48:46.433269779 +0000 UTC m=+13.367155653" lastFinishedPulling="2025-05-13 23:48:57.630172821 +0000 
UTC m=+24.564058695" observedRunningTime="2025-05-13 23:48:58.355153825 +0000 UTC m=+25.289039699" watchObservedRunningTime="2025-05-13 23:49:06.632308211 +0000 UTC m=+33.566194085" May 13 23:49:06.638432 containerd[1479]: 2025-05-13 23:49:06.245 [INFO][3871] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:49:06.638432 containerd[1479]: 2025-05-13 23:49:06.371 [INFO][3871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0 coredns-668d6bf9bc- kube-system e5dd69a2-e3f1-4b50-820a-67e59e00cb88 653 0 2025-05-13 23:48:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-qqgk2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic169d9bf1ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqgk2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqgk2-" May 13 23:49:06.638432 containerd[1479]: 2025-05-13 23:49:06.371 [INFO][3871] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqgk2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" May 13 23:49:06.638432 containerd[1479]: 2025-05-13 23:49:06.545 [INFO][3901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" HandleID="k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Workload="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.568 
[INFO][3901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" HandleID="k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Workload="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qqgk2", "timestamp":"2025-05-13 23:49:06.545041458 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.568 [INFO][3901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.568 [INFO][3901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.568 [INFO][3901] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.570 [INFO][3901] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" host="localhost" May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.576 [INFO][3901] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.583 [INFO][3901] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.585 [INFO][3901] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.588 [INFO][3901] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:49:06.638706 containerd[1479]: 2025-05-13 23:49:06.588 [INFO][3901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" host="localhost" May 13 23:49:06.638913 containerd[1479]: 2025-05-13 23:49:06.589 [INFO][3901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9 May 13 23:49:06.638913 containerd[1479]: 2025-05-13 23:49:06.594 [INFO][3901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" host="localhost" May 13 23:49:06.638913 containerd[1479]: 2025-05-13 23:49:06.599 [INFO][3901] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" host="localhost" May 13 23:49:06.638913 containerd[1479]: 2025-05-13 23:49:06.600 [INFO][3901] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" host="localhost" May 13 23:49:06.638913 containerd[1479]: 2025-05-13 23:49:06.600 [INFO][3901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:49:06.638913 containerd[1479]: 2025-05-13 23:49:06.600 [INFO][3901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" HandleID="k8s-pod-network.f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Workload="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" May 13 23:49:06.639041 containerd[1479]: 2025-05-13 23:49:06.602 [INFO][3871] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqgk2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e5dd69a2-e3f1-4b50-820a-67e59e00cb88", ResourceVersion:"653", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qqgk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic169d9bf1ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:06.639092 containerd[1479]: 2025-05-13 23:49:06.602 [INFO][3871] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqgk2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" May 13 23:49:06.639092 containerd[1479]: 2025-05-13 23:49:06.602 [INFO][3871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic169d9bf1ad ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqgk2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" May 13 23:49:06.639092 containerd[1479]: 2025-05-13 23:49:06.612 [INFO][3871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqgk2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" May 13 
23:49:06.639158 containerd[1479]: 2025-05-13 23:49:06.612 [INFO][3871] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqgk2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e5dd69a2-e3f1-4b50-820a-67e59e00cb88", ResourceVersion:"653", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9", Pod:"coredns-668d6bf9bc-qqgk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic169d9bf1ad", MAC:"c2:e8:05:ba:f2:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:06.639158 containerd[1479]: 2025-05-13 23:49:06.633 [INFO][3871] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqgk2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqgk2-eth0" May 13 23:49:06.725009 systemd-networkd[1387]: calif585775f387: Link UP May 13 23:49:06.725636 systemd-networkd[1387]: calif585775f387: Gained carrier May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.244 [INFO][3877] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.371 [INFO][3877] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wbtkz-eth0 csi-node-driver- calico-system e19e4307-fe13-490d-a3e6-6829c87953d9 581 0 2025-05-13 23:48:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wbtkz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif585775f387 [] []}} ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Namespace="calico-system" Pod="csi-node-driver-wbtkz" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbtkz-" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.371 [INFO][3877] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" 
Namespace="calico-system" Pod="csi-node-driver-wbtkz" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbtkz-eth0" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.545 [INFO][3903] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" HandleID="k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Workload="localhost-k8s-csi--node--driver--wbtkz-eth0" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.569 [INFO][3903] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" HandleID="k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Workload="localhost-k8s-csi--node--driver--wbtkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000294d60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wbtkz", "timestamp":"2025-05-13 23:49:06.545032857 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.569 [INFO][3903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.600 [INFO][3903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.600 [INFO][3903] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.671 [INFO][3903] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.676 [INFO][3903] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.686 [INFO][3903] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.688 [INFO][3903] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.691 [INFO][3903] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.692 [INFO][3903] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.694 [INFO][3903] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.700 [INFO][3903] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.711 [INFO][3903] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.711 [INFO][3903] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" host="localhost" May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.712 [INFO][3903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:49:06.747993 containerd[1479]: 2025-05-13 23:49:06.712 [INFO][3903] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" HandleID="k8s-pod-network.f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Workload="localhost-k8s-csi--node--driver--wbtkz-eth0" May 13 23:49:06.748672 containerd[1479]: 2025-05-13 23:49:06.716 [INFO][3877] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Namespace="calico-system" Pod="csi-node-driver-wbtkz" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbtkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wbtkz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e19e4307-fe13-490d-a3e6-6829c87953d9", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wbtkz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif585775f387", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:06.748672 containerd[1479]: 2025-05-13 23:49:06.719 [INFO][3877] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Namespace="calico-system" Pod="csi-node-driver-wbtkz" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbtkz-eth0" May 13 23:49:06.748672 containerd[1479]: 2025-05-13 23:49:06.719 [INFO][3877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif585775f387 ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Namespace="calico-system" Pod="csi-node-driver-wbtkz" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbtkz-eth0" May 13 23:49:06.748672 containerd[1479]: 2025-05-13 23:49:06.725 [INFO][3877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Namespace="calico-system" Pod="csi-node-driver-wbtkz" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbtkz-eth0" May 13 23:49:06.748672 containerd[1479]: 2025-05-13 23:49:06.727 [INFO][3877] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Namespace="calico-system" 
Pod="csi-node-driver-wbtkz" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbtkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wbtkz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e19e4307-fe13-490d-a3e6-6829c87953d9", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead", Pod:"csi-node-driver-wbtkz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif585775f387", MAC:"ca:34:6b:05:70:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:06.748672 containerd[1479]: 2025-05-13 23:49:06.743 [INFO][3877] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" Namespace="calico-system" Pod="csi-node-driver-wbtkz" WorkloadEndpoint="localhost-k8s-csi--node--driver--wbtkz-eth0" May 13 23:49:06.815583 containerd[1479]: 
time="2025-05-13T23:49:06.815517055Z" level=info msg="connecting to shim f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead" address="unix:///run/containerd/s/831c19781781c8c5e74db912ab182f5f6320a14e79e78546b036471a2122c60f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:06.818150 containerd[1479]: time="2025-05-13T23:49:06.818112192Z" level=info msg="connecting to shim f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9" address="unix:///run/containerd/s/4d3ef8ea47154da7c023cce216108b45565f919dfcabb37f8daa47fd2ddb8f20" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:06.846798 systemd[1]: Started cri-containerd-f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9.scope - libcontainer container f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9. May 13 23:49:06.871781 systemd[1]: Started cri-containerd-f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead.scope - libcontainer container f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead. 
May 13 23:49:06.872217 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:49:06.884825 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:49:06.898222 containerd[1479]: time="2025-05-13T23:49:06.898180953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wbtkz,Uid:e19e4307-fe13-490d-a3e6-6829c87953d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead\"" May 13 23:49:06.899925 containerd[1479]: time="2025-05-13T23:49:06.899873840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:49:06.933847 containerd[1479]: time="2025-05-13T23:49:06.933808197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqgk2,Uid:e5dd69a2-e3f1-4b50-820a-67e59e00cb88,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9\"" May 13 23:49:06.936854 containerd[1479]: time="2025-05-13T23:49:06.936681681Z" level=info msg="CreateContainer within sandbox \"f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:49:06.951398 containerd[1479]: time="2025-05-13T23:49:06.951341132Z" level=info msg="Container 8b6584c59ed403f7c91ad78825c5fd698d15a0e8b461c8d102e83992eb461534: CDI devices from CRI Config.CDIDevices: []" May 13 23:49:06.958385 containerd[1479]: time="2025-05-13T23:49:06.958319782Z" level=info msg="CreateContainer within sandbox \"f2bc70ba25a08f982030cc7b1026e4d4a3eef27ed8c26e7f7446e8ab631e87a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b6584c59ed403f7c91ad78825c5fd698d15a0e8b461c8d102e83992eb461534\"" May 13 23:49:06.958936 containerd[1479]: time="2025-05-13T23:49:06.958830913Z" level=info msg="StartContainer 
for \"8b6584c59ed403f7c91ad78825c5fd698d15a0e8b461c8d102e83992eb461534\"" May 13 23:49:06.961054 containerd[1479]: time="2025-05-13T23:49:06.961014169Z" level=info msg="connecting to shim 8b6584c59ed403f7c91ad78825c5fd698d15a0e8b461c8d102e83992eb461534" address="unix:///run/containerd/s/4d3ef8ea47154da7c023cce216108b45565f919dfcabb37f8daa47fd2ddb8f20" protocol=ttrpc version=3 May 13 23:49:06.987763 systemd[1]: Started cri-containerd-8b6584c59ed403f7c91ad78825c5fd698d15a0e8b461c8d102e83992eb461534.scope - libcontainer container 8b6584c59ed403f7c91ad78825c5fd698d15a0e8b461c8d102e83992eb461534. May 13 23:49:07.015232 containerd[1479]: time="2025-05-13T23:49:07.015193238Z" level=info msg="StartContainer for \"8b6584c59ed403f7c91ad78825c5fd698d15a0e8b461c8d102e83992eb461534\" returns successfully" May 13 23:49:07.182786 containerd[1479]: time="2025-05-13T23:49:07.182359205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6446b7dc94-fscxf,Uid:ded66252-e36f-4904-9b47-67460f4a88c7,Namespace:calico-apiserver,Attempt:0,}" May 13 23:49:07.182786 containerd[1479]: time="2025-05-13T23:49:07.182441373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hm7hx,Uid:11e30734-8276-4815-bf03-40af4e03d3a6,Namespace:kube-system,Attempt:0,}" May 13 23:49:07.361776 systemd-networkd[1387]: calia94901b1c2f: Link UP May 13 23:49:07.362459 systemd-networkd[1387]: calia94901b1c2f: Gained carrier May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.214 [INFO][4092] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.235 [INFO][4092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0 coredns-668d6bf9bc- kube-system 11e30734-8276-4815-bf03-40af4e03d3a6 654 0 2025-05-13 23:48:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-hm7hx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia94901b1c2f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Namespace="kube-system" Pod="coredns-668d6bf9bc-hm7hx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hm7hx-" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.235 [INFO][4092] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Namespace="kube-system" Pod="coredns-668d6bf9bc-hm7hx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.282 [INFO][4122] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" HandleID="k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Workload="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.318 [INFO][4122] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" HandleID="k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Workload="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000274cb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-hm7hx", "timestamp":"2025-05-13 23:49:07.282667506 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.318 [INFO][4122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.318 [INFO][4122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.318 [INFO][4122] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.320 [INFO][4122] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.329 [INFO][4122] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.335 [INFO][4122] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.337 [INFO][4122] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.339 [INFO][4122] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.340 [INFO][4122] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.346 [INFO][4122] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369 May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.352 [INFO][4122] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.357 [INFO][4122] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.358 [INFO][4122] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" host="localhost" May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.358 [INFO][4122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:49:07.378139 containerd[1479]: 2025-05-13 23:49:07.358 [INFO][4122] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" HandleID="k8s-pod-network.bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Workload="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" May 13 23:49:07.378900 containerd[1479]: 2025-05-13 23:49:07.359 [INFO][4092] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Namespace="kube-system" Pod="coredns-668d6bf9bc-hm7hx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"11e30734-8276-4815-bf03-40af4e03d3a6", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-hm7hx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia94901b1c2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:07.378900 containerd[1479]: 2025-05-13 23:49:07.359 [INFO][4092] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Namespace="kube-system" Pod="coredns-668d6bf9bc-hm7hx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" May 13 23:49:07.378900 containerd[1479]: 2025-05-13 23:49:07.360 [INFO][4092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia94901b1c2f ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Namespace="kube-system" Pod="coredns-668d6bf9bc-hm7hx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" May 13 23:49:07.378900 containerd[1479]: 2025-05-13 
23:49:07.362 [INFO][4092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Namespace="kube-system" Pod="coredns-668d6bf9bc-hm7hx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" May 13 23:49:07.378900 containerd[1479]: 2025-05-13 23:49:07.363 [INFO][4092] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Namespace="kube-system" Pod="coredns-668d6bf9bc-hm7hx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"11e30734-8276-4815-bf03-40af4e03d3a6", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369", Pod:"coredns-668d6bf9bc-hm7hx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia94901b1c2f", MAC:"7a:dc:ed:3d:ab:e4", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:07.378900 containerd[1479]: 2025-05-13 23:49:07.376 [INFO][4092] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" Namespace="kube-system" Pod="coredns-668d6bf9bc-hm7hx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hm7hx-eth0" May 13 23:49:07.389233 kubelet[2582]: I0513 23:49:07.389162 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qqgk2" podStartSLOduration=28.389132835 podStartE2EDuration="28.389132835s" podCreationTimestamp="2025-05-13 23:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:07.388286794 +0000 UTC m=+34.322172668" watchObservedRunningTime="2025-05-13 23:49:07.389132835 +0000 UTC m=+34.323018709" May 13 23:49:07.437703 containerd[1479]: time="2025-05-13T23:49:07.435353170Z" level=info msg="connecting to shim bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369" address="unix:///run/containerd/s/78180f54a5f4f15a95d9c2ffaaf106fa85ffac56138e78c4e0f90a7446b0c01c" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:07.461775 systemd[1]: Started cri-containerd-bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369.scope - libcontainer container bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369. 
May 13 23:49:07.477434 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:49:07.499232 containerd[1479]: time="2025-05-13T23:49:07.499180786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hm7hx,Uid:11e30734-8276-4815-bf03-40af4e03d3a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369\"" May 13 23:49:07.502805 containerd[1479]: time="2025-05-13T23:49:07.502758248Z" level=info msg="CreateContainer within sandbox \"bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:49:07.511596 containerd[1479]: time="2025-05-13T23:49:07.510885584Z" level=info msg="Container e9217c195d6086d2aef50608c1ddda0880a62b95f4ae3059bc1832c65abba9b7: CDI devices from CRI Config.CDIDevices: []" May 13 23:49:07.515852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642406303.mount: Deactivated successfully. 
May 13 23:49:07.521307 containerd[1479]: time="2025-05-13T23:49:07.521263615Z" level=info msg="CreateContainer within sandbox \"bda2aa577375c87f182ecbe4c2180dd1398b96ece9699b44c3e7fbfc95f36369\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9217c195d6086d2aef50608c1ddda0880a62b95f4ae3059bc1832c65abba9b7\"" May 13 23:49:07.523117 containerd[1479]: time="2025-05-13T23:49:07.523079149Z" level=info msg="StartContainer for \"e9217c195d6086d2aef50608c1ddda0880a62b95f4ae3059bc1832c65abba9b7\"" May 13 23:49:07.524471 containerd[1479]: time="2025-05-13T23:49:07.524414996Z" level=info msg="connecting to shim e9217c195d6086d2aef50608c1ddda0880a62b95f4ae3059bc1832c65abba9b7" address="unix:///run/containerd/s/78180f54a5f4f15a95d9c2ffaaf106fa85ffac56138e78c4e0f90a7446b0c01c" protocol=ttrpc version=3 May 13 23:49:07.549763 systemd[1]: Started cri-containerd-e9217c195d6086d2aef50608c1ddda0880a62b95f4ae3059bc1832c65abba9b7.scope - libcontainer container e9217c195d6086d2aef50608c1ddda0880a62b95f4ae3059bc1832c65abba9b7. 
May 13 23:49:07.550049 systemd-networkd[1387]: cali46991acc06a: Link UP May 13 23:49:07.550241 systemd-networkd[1387]: cali46991acc06a: Gained carrier May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.264 [INFO][4109] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.284 [INFO][4109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0 calico-apiserver-6446b7dc94- calico-apiserver ded66252-e36f-4904-9b47-67460f4a88c7 651 0 2025-05-13 23:48:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6446b7dc94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6446b7dc94-fscxf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali46991acc06a [] []}} ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-fscxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.284 [INFO][4109] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-fscxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.339 [INFO][4135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" HandleID="k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" 
Workload="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.415 [INFO][4135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" HandleID="k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Workload="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000392b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6446b7dc94-fscxf", "timestamp":"2025-05-13 23:49:07.339308396 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.415 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.415 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.416 [INFO][4135] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.420 [INFO][4135] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.514 [INFO][4135] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.520 [INFO][4135] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.522 [INFO][4135] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.526 [INFO][4135] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.526 [INFO][4135] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.528 [INFO][4135] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3 May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.533 [INFO][4135] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.540 [INFO][4135] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.540 [INFO][4135] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" host="localhost" May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.540 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:49:07.564126 containerd[1479]: 2025-05-13 23:49:07.540 [INFO][4135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" HandleID="k8s-pod-network.9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Workload="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" May 13 23:49:07.565164 containerd[1479]: 2025-05-13 23:49:07.548 [INFO][4109] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-fscxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0", GenerateName:"calico-apiserver-6446b7dc94-", Namespace:"calico-apiserver", SelfLink:"", UID:"ded66252-e36f-4904-9b47-67460f4a88c7", ResourceVersion:"651", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6446b7dc94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6446b7dc94-fscxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46991acc06a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:07.565164 containerd[1479]: 2025-05-13 23:49:07.548 [INFO][4109] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-fscxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" May 13 23:49:07.565164 containerd[1479]: 2025-05-13 23:49:07.548 [INFO][4109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46991acc06a ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-fscxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" May 13 23:49:07.565164 containerd[1479]: 2025-05-13 23:49:07.550 [INFO][4109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-fscxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" May 13 23:49:07.565164 containerd[1479]: 2025-05-13 23:49:07.550 [INFO][4109] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-fscxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0", GenerateName:"calico-apiserver-6446b7dc94-", Namespace:"calico-apiserver", SelfLink:"", UID:"ded66252-e36f-4904-9b47-67460f4a88c7", ResourceVersion:"651", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6446b7dc94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3", Pod:"calico-apiserver-6446b7dc94-fscxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46991acc06a", MAC:"42:8e:43:09:6c:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:07.565164 containerd[1479]: 2025-05-13 23:49:07.562 [INFO][4109] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" 
Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-fscxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--fscxf-eth0" May 13 23:49:07.586392 containerd[1479]: time="2025-05-13T23:49:07.586351632Z" level=info msg="StartContainer for \"e9217c195d6086d2aef50608c1ddda0880a62b95f4ae3059bc1832c65abba9b7\" returns successfully" May 13 23:49:07.591954 containerd[1479]: time="2025-05-13T23:49:07.591907763Z" level=info msg="connecting to shim 9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3" address="unix:///run/containerd/s/d3bbb2452bbad7dbd34e5e88486889ca5ce65c892b323a02e54a77e95d4f91bd" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:07.617022 systemd[1]: Started cri-containerd-9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3.scope - libcontainer container 9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3. May 13 23:49:07.633366 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:49:07.652788 containerd[1479]: time="2025-05-13T23:49:07.652748814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6446b7dc94-fscxf,Uid:ded66252-e36f-4904-9b47-67460f4a88c7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3\"" May 13 23:49:07.838541 containerd[1479]: time="2025-05-13T23:49:07.838499236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:07.840322 containerd[1479]: time="2025-05-13T23:49:07.840271085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 23:49:07.841236 containerd[1479]: time="2025-05-13T23:49:07.841186933Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:07.844685 containerd[1479]: time="2025-05-13T23:49:07.844191540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:07.845340 containerd[1479]: time="2025-05-13T23:49:07.845314607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 945.407123ms" May 13 23:49:07.845449 containerd[1479]: time="2025-05-13T23:49:07.845432778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 23:49:07.846408 containerd[1479]: time="2025-05-13T23:49:07.846383189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:49:07.848162 containerd[1479]: time="2025-05-13T23:49:07.848022706Z" level=info msg="CreateContainer within sandbox \"f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:49:07.860432 containerd[1479]: time="2025-05-13T23:49:07.860380886Z" level=info msg="Container 9e55f6496e48c2cae4b197e84f58ff6c00027c735c9ad4c92b022c6012844b7b: CDI devices from CRI Config.CDIDevices: []" May 13 23:49:07.892027 containerd[1479]: time="2025-05-13T23:49:07.891976864Z" level=info msg="CreateContainer within sandbox \"f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9e55f6496e48c2cae4b197e84f58ff6c00027c735c9ad4c92b022c6012844b7b\"" May 13 
23:49:07.894019 containerd[1479]: time="2025-05-13T23:49:07.893982255Z" level=info msg="StartContainer for \"9e55f6496e48c2cae4b197e84f58ff6c00027c735c9ad4c92b022c6012844b7b\"" May 13 23:49:07.895810 containerd[1479]: time="2025-05-13T23:49:07.895773707Z" level=info msg="connecting to shim 9e55f6496e48c2cae4b197e84f58ff6c00027c735c9ad4c92b022c6012844b7b" address="unix:///run/containerd/s/831c19781781c8c5e74db912ab182f5f6320a14e79e78546b036471a2122c60f" protocol=ttrpc version=3 May 13 23:49:07.915804 systemd[1]: Started cri-containerd-9e55f6496e48c2cae4b197e84f58ff6c00027c735c9ad4c92b022c6012844b7b.scope - libcontainer container 9e55f6496e48c2cae4b197e84f58ff6c00027c735c9ad4c92b022c6012844b7b. May 13 23:49:07.959413 containerd[1479]: time="2025-05-13T23:49:07.959375661Z" level=info msg="StartContainer for \"9e55f6496e48c2cae4b197e84f58ff6c00027c735c9ad4c92b022c6012844b7b\" returns successfully" May 13 23:49:08.374859 systemd-networkd[1387]: calic169d9bf1ad: Gained IPv6LL May 13 23:49:08.403296 kubelet[2582]: I0513 23:49:08.403220 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hm7hx" podStartSLOduration=29.403202646 podStartE2EDuration="29.403202646s" podCreationTimestamp="2025-05-13 23:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:08.39025269 +0000 UTC m=+35.324138564" watchObservedRunningTime="2025-05-13 23:49:08.403202646 +0000 UTC m=+35.337088520" May 13 23:49:08.571322 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:56014.service - OpenSSH per-connection server daemon (10.0.0.1:56014). 
May 13 23:49:08.630850 systemd-networkd[1387]: cali46991acc06a: Gained IPv6LL May 13 23:49:08.631186 systemd-networkd[1387]: calif585775f387: Gained IPv6LL May 13 23:49:08.640510 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 56014 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M May 13 23:49:08.642086 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:49:08.646619 systemd-logind[1457]: New session 9 of user core. May 13 23:49:08.655771 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:49:08.917530 sshd[4362]: Connection closed by 10.0.0.1 port 56014 May 13 23:49:08.916634 sshd-session[4360]: pam_unix(sshd:session): session closed for user core May 13 23:49:08.923402 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. May 13 23:49:08.924199 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:56014.service: Deactivated successfully. May 13 23:49:08.929005 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:49:08.932485 systemd-logind[1457]: Removed session 9. 
May 13 23:49:08.950810 systemd-networkd[1387]: calia94901b1c2f: Gained IPv6LL May 13 23:49:09.184309 containerd[1479]: time="2025-05-13T23:49:09.183982295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c58f7969-87dkr,Uid:9662eb3c-494a-430d-876b-4282439856af,Namespace:calico-system,Attempt:0,}" May 13 23:49:09.328545 systemd-networkd[1387]: cali8f39a0bf5f5: Link UP May 13 23:49:09.329192 systemd-networkd[1387]: cali8f39a0bf5f5: Gained carrier May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.211 [INFO][4403] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.229 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0 calico-kube-controllers-77c58f7969- calico-system 9662eb3c-494a-430d-876b-4282439856af 649 0 2025-05-13 23:48:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77c58f7969 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77c58f7969-87dkr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8f39a0bf5f5 [] []}} ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Namespace="calico-system" Pod="calico-kube-controllers-77c58f7969-87dkr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.229 [INFO][4403] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Namespace="calico-system" Pod="calico-kube-controllers-77c58f7969-87dkr" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.259 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" HandleID="k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Workload="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.270 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" HandleID="k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Workload="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77c58f7969-87dkr", "timestamp":"2025-05-13 23:49:09.259625371 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.270 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.270 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.270 [INFO][4418] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.272 [INFO][4418] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.294 [INFO][4418] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.299 [INFO][4418] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.301 [INFO][4418] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.303 [INFO][4418] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.303 [INFO][4418] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.304 [INFO][4418] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.313 [INFO][4418] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.322 [INFO][4418] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.323 [INFO][4418] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" host="localhost" May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.323 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:49:09.356365 containerd[1479]: 2025-05-13 23:49:09.323 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" HandleID="k8s-pod-network.c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Workload="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" May 13 23:49:09.357166 containerd[1479]: 2025-05-13 23:49:09.324 [INFO][4403] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Namespace="calico-system" Pod="calico-kube-controllers-77c58f7969-87dkr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0", GenerateName:"calico-kube-controllers-77c58f7969-", Namespace:"calico-system", SelfLink:"", UID:"9662eb3c-494a-430d-876b-4282439856af", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c58f7969", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77c58f7969-87dkr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f39a0bf5f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:09.357166 containerd[1479]: 2025-05-13 23:49:09.325 [INFO][4403] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Namespace="calico-system" Pod="calico-kube-controllers-77c58f7969-87dkr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" May 13 23:49:09.357166 containerd[1479]: 2025-05-13 23:49:09.325 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f39a0bf5f5 ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Namespace="calico-system" Pod="calico-kube-controllers-77c58f7969-87dkr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" May 13 23:49:09.357166 containerd[1479]: 2025-05-13 23:49:09.328 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Namespace="calico-system" Pod="calico-kube-controllers-77c58f7969-87dkr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" May 13 23:49:09.357166 containerd[1479]: 2025-05-13 23:49:09.328 [INFO][4403] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Namespace="calico-system" Pod="calico-kube-controllers-77c58f7969-87dkr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0", GenerateName:"calico-kube-controllers-77c58f7969-", Namespace:"calico-system", SelfLink:"", UID:"9662eb3c-494a-430d-876b-4282439856af", ResourceVersion:"649", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77c58f7969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc", Pod:"calico-kube-controllers-77c58f7969-87dkr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f39a0bf5f5", MAC:"46:73:a8:e0:5a:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:09.357166 containerd[1479]: 2025-05-13 23:49:09.353 [INFO][4403] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" Namespace="calico-system" Pod="calico-kube-controllers-77c58f7969-87dkr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77c58f7969--87dkr-eth0" May 13 23:49:09.432529 containerd[1479]: time="2025-05-13T23:49:09.432444447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:09.434978 containerd[1479]: time="2025-05-13T23:49:09.433960822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 23:49:09.436764 containerd[1479]: time="2025-05-13T23:49:09.436727229Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:09.440075 containerd[1479]: time="2025-05-13T23:49:09.440038045Z" level=info msg="connecting to shim c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc" address="unix:///run/containerd/s/8c697c596353e5ed95889b8574ad1ac7f66dc674f91a8de15852bc9e19902e16" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:09.442118 containerd[1479]: time="2025-05-13T23:49:09.442077667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:09.442850 containerd[1479]: time="2025-05-13T23:49:09.442819453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.596112873s" May 
13 23:49:09.442923 containerd[1479]: time="2025-05-13T23:49:09.442852216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:49:09.444440 containerd[1479]: time="2025-05-13T23:49:09.444409315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:49:09.447512 containerd[1479]: time="2025-05-13T23:49:09.447204765Z" level=info msg="CreateContainer within sandbox \"9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:49:09.457193 containerd[1479]: time="2025-05-13T23:49:09.457153374Z" level=info msg="Container 8fe2d528993d9a5da538dbcf600caf2c8c0157857911d6a863eacfb54792dff5: CDI devices from CRI Config.CDIDevices: []" May 13 23:49:09.467210 containerd[1479]: time="2025-05-13T23:49:09.467082741Z" level=info msg="CreateContainer within sandbox \"9adbe308d17e9fce767153be652d5789cece3b75de11b765d249ee2c4fe1d4e3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8fe2d528993d9a5da538dbcf600caf2c8c0157857911d6a863eacfb54792dff5\"" May 13 23:49:09.467723 containerd[1479]: time="2025-05-13T23:49:09.467698716Z" level=info msg="StartContainer for \"8fe2d528993d9a5da538dbcf600caf2c8c0157857911d6a863eacfb54792dff5\"" May 13 23:49:09.469172 containerd[1479]: time="2025-05-13T23:49:09.469127203Z" level=info msg="connecting to shim 8fe2d528993d9a5da538dbcf600caf2c8c0157857911d6a863eacfb54792dff5" address="unix:///run/containerd/s/d3bbb2452bbad7dbd34e5e88486889ca5ce65c892b323a02e54a77e95d4f91bd" protocol=ttrpc version=3 May 13 23:49:09.470758 systemd[1]: Started cri-containerd-c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc.scope - libcontainer container c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc. 
May 13 23:49:09.488103 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:49:09.498752 systemd[1]: Started cri-containerd-8fe2d528993d9a5da538dbcf600caf2c8c0157857911d6a863eacfb54792dff5.scope - libcontainer container 8fe2d528993d9a5da538dbcf600caf2c8c0157857911d6a863eacfb54792dff5. May 13 23:49:09.511360 containerd[1479]: time="2025-05-13T23:49:09.511304170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77c58f7969-87dkr,Uid:9662eb3c-494a-430d-876b-4282439856af,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc\"" May 13 23:49:09.554640 containerd[1479]: time="2025-05-13T23:49:09.553398170Z" level=info msg="StartContainer for \"8fe2d528993d9a5da538dbcf600caf2c8c0157857911d6a863eacfb54792dff5\" returns successfully" May 13 23:49:10.180815 containerd[1479]: time="2025-05-13T23:49:10.180757578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6446b7dc94-zh6x9,Uid:899bb66a-cb95-4fa5-8ad5-a9c8f93b8668,Namespace:calico-apiserver,Attempt:0,}" May 13 23:49:10.320416 systemd-networkd[1387]: cali65cb1c014cb: Link UP May 13 23:49:10.322464 systemd-networkd[1387]: cali65cb1c014cb: Gained carrier May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.211 [INFO][4551] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.224 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0 calico-apiserver-6446b7dc94- calico-apiserver 899bb66a-cb95-4fa5-8ad5-a9c8f93b8668 652 0 2025-05-13 23:48:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6446b7dc94 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6446b7dc94-zh6x9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65cb1c014cb [] []}} ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-zh6x9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.224 [INFO][4551] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-zh6x9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.259 [INFO][4566] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" HandleID="k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Workload="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.271 [INFO][4566] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" HandleID="k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Workload="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000426870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6446b7dc94-zh6x9", "timestamp":"2025-05-13 23:49:10.259048951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.271 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.271 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.271 [INFO][4566] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.276 [INFO][4566] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.283 [INFO][4566] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.290 [INFO][4566] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.293 [INFO][4566] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.296 [INFO][4566] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.296 [INFO][4566] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.298 [INFO][4566] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.303 [INFO][4566] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.311 [INFO][4566] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.311 [INFO][4566] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" host="localhost" May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.311 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:49:10.338216 containerd[1479]: 2025-05-13 23:49:10.311 [INFO][4566] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" HandleID="k8s-pod-network.0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Workload="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" May 13 23:49:10.339024 containerd[1479]: 2025-05-13 23:49:10.315 [INFO][4551] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-zh6x9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0", GenerateName:"calico-apiserver-6446b7dc94-", Namespace:"calico-apiserver", SelfLink:"", UID:"899bb66a-cb95-4fa5-8ad5-a9c8f93b8668", ResourceVersion:"652", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 45, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6446b7dc94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6446b7dc94-zh6x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65cb1c014cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:10.339024 containerd[1479]: 2025-05-13 23:49:10.315 [INFO][4551] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-zh6x9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" May 13 23:49:10.339024 containerd[1479]: 2025-05-13 23:49:10.315 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65cb1c014cb ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-zh6x9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" May 13 23:49:10.339024 containerd[1479]: 2025-05-13 23:49:10.323 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" 
Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-zh6x9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" May 13 23:49:10.339024 containerd[1479]: 2025-05-13 23:49:10.323 [INFO][4551] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-zh6x9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0", GenerateName:"calico-apiserver-6446b7dc94-", Namespace:"calico-apiserver", SelfLink:"", UID:"899bb66a-cb95-4fa5-8ad5-a9c8f93b8668", ResourceVersion:"652", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 48, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6446b7dc94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db", Pod:"calico-apiserver-6446b7dc94-zh6x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65cb1c014cb", MAC:"6a:fd:ef:0a:19:40", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:49:10.339024 containerd[1479]: 2025-05-13 23:49:10.334 [INFO][4551] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" Namespace="calico-apiserver" Pod="calico-apiserver-6446b7dc94-zh6x9" WorkloadEndpoint="localhost-k8s-calico--apiserver--6446b7dc94--zh6x9-eth0" May 13 23:49:10.364188 containerd[1479]: time="2025-05-13T23:49:10.364137602Z" level=info msg="connecting to shim 0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db" address="unix:///run/containerd/s/461e19286692dff6dc353a01d3e1a4e90e8d60d4fcba87a99af2db7abefbd584" namespace=k8s.io protocol=ttrpc version=3 May 13 23:49:10.389051 systemd[1]: Started cri-containerd-0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db.scope - libcontainer container 0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db. May 13 23:49:10.401339 kubelet[2582]: I0513 23:49:10.401263 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6446b7dc94-fscxf" podStartSLOduration=23.611593658 podStartE2EDuration="25.401244412s" podCreationTimestamp="2025-05-13 23:48:45 +0000 UTC" firstStartedPulling="2025-05-13 23:49:07.654011975 +0000 UTC m=+34.587897849" lastFinishedPulling="2025-05-13 23:49:09.443662729 +0000 UTC m=+36.377548603" observedRunningTime="2025-05-13 23:49:10.400885781 +0000 UTC m=+37.334771655" watchObservedRunningTime="2025-05-13 23:49:10.401244412 +0000 UTC m=+37.335130246" May 13 23:49:10.416937 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:49:10.448325 containerd[1479]: time="2025-05-13T23:49:10.448017658Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6446b7dc94-zh6x9,Uid:899bb66a-cb95-4fa5-8ad5-a9c8f93b8668,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db\"" May 13 23:49:10.450859 containerd[1479]: time="2025-05-13T23:49:10.450810700Z" level=info msg="CreateContainer within sandbox \"0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:49:10.475605 containerd[1479]: time="2025-05-13T23:49:10.473959823Z" level=info msg="Container 0ba6e7e45ef023814ee63222844e24c5d0e2f4f9b3fb8974518c953c7928b50b: CDI devices from CRI Config.CDIDevices: []" May 13 23:49:10.489063 containerd[1479]: time="2025-05-13T23:49:10.488987923Z" level=info msg="CreateContainer within sandbox \"0a62361c8526f96737e2c8f7548f26241e73d47e0090a1a314cce6fa8e6b90db\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ba6e7e45ef023814ee63222844e24c5d0e2f4f9b3fb8974518c953c7928b50b\"" May 13 23:49:10.490035 containerd[1479]: time="2025-05-13T23:49:10.490000250Z" level=info msg="StartContainer for \"0ba6e7e45ef023814ee63222844e24c5d0e2f4f9b3fb8974518c953c7928b50b\"" May 13 23:49:10.491147 containerd[1479]: time="2025-05-13T23:49:10.491119227Z" level=info msg="connecting to shim 0ba6e7e45ef023814ee63222844e24c5d0e2f4f9b3fb8974518c953c7928b50b" address="unix:///run/containerd/s/461e19286692dff6dc353a01d3e1a4e90e8d60d4fcba87a99af2db7abefbd584" protocol=ttrpc version=3 May 13 23:49:10.514812 systemd[1]: Started cri-containerd-0ba6e7e45ef023814ee63222844e24c5d0e2f4f9b3fb8974518c953c7928b50b.scope - libcontainer container 0ba6e7e45ef023814ee63222844e24c5d0e2f4f9b3fb8974518c953c7928b50b. 
May 13 23:49:10.525875 containerd[1479]: time="2025-05-13T23:49:10.525243619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:10.597942 containerd[1479]: time="2025-05-13T23:49:10.597851500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 23:49:10.602044 containerd[1479]: time="2025-05-13T23:49:10.600633621Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:10.604606 containerd[1479]: time="2025-05-13T23:49:10.604570561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:49:10.605218 containerd[1479]: time="2025-05-13T23:49:10.605183534Z" level=info msg="StartContainer for \"0ba6e7e45ef023814ee63222844e24c5d0e2f4f9b3fb8974518c953c7928b50b\" returns successfully" May 13 23:49:10.605321 containerd[1479]: time="2025-05-13T23:49:10.605301065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.160748736s" May 13 23:49:10.605394 containerd[1479]: time="2025-05-13T23:49:10.605323186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 23:49:10.606946 containerd[1479]: 
time="2025-05-13T23:49:10.606631060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 23:49:10.608063 containerd[1479]: time="2025-05-13T23:49:10.608034141Z" level=info msg="CreateContainer within sandbox \"f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 23:49:10.620656 containerd[1479]: time="2025-05-13T23:49:10.620602748Z" level=info msg="Container b6b053543ce9e10a25da60b29a986c974ec80695e3c1159bbee27a2873d2474c: CDI devices from CRI Config.CDIDevices: []" May 13 23:49:10.631560 containerd[1479]: time="2025-05-13T23:49:10.631506532Z" level=info msg="CreateContainer within sandbox \"f4a3fcc8fa82076bf1700b4ea7da4d9a42d2c240d7978efdc02a3c9c3b0d2ead\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b6b053543ce9e10a25da60b29a986c974ec80695e3c1159bbee27a2873d2474c\"" May 13 23:49:10.632883 containerd[1479]: time="2025-05-13T23:49:10.632851208Z" level=info msg="StartContainer for \"b6b053543ce9e10a25da60b29a986c974ec80695e3c1159bbee27a2873d2474c\"" May 13 23:49:10.634681 containerd[1479]: time="2025-05-13T23:49:10.634649763Z" level=info msg="connecting to shim b6b053543ce9e10a25da60b29a986c974ec80695e3c1159bbee27a2873d2474c" address="unix:///run/containerd/s/831c19781781c8c5e74db912ab182f5f6320a14e79e78546b036471a2122c60f" protocol=ttrpc version=3 May 13 23:49:10.653283 systemd[1]: Started cri-containerd-b6b053543ce9e10a25da60b29a986c974ec80695e3c1159bbee27a2873d2474c.scope - libcontainer container b6b053543ce9e10a25da60b29a986c974ec80695e3c1159bbee27a2873d2474c. 
May 13 23:49:10.705537 containerd[1479]: time="2025-05-13T23:49:10.705401044Z" level=info msg="StartContainer for \"b6b053543ce9e10a25da60b29a986c974ec80695e3c1159bbee27a2873d2474c\" returns successfully"
May 13 23:49:11.126706 systemd-networkd[1387]: cali8f39a0bf5f5: Gained IPv6LL
May 13 23:49:11.252022 kubelet[2582]: I0513 23:49:11.251965 2582 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 13 23:49:11.255342 kubelet[2582]: I0513 23:49:11.255304 2582 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 13 23:49:11.411639 kubelet[2582]: I0513 23:49:11.411288 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:49:11.433356 kubelet[2582]: I0513 23:49:11.433278 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wbtkz" podStartSLOduration=21.726367554 podStartE2EDuration="25.433261069s" podCreationTimestamp="2025-05-13 23:48:46 +0000 UTC" firstStartedPulling="2025-05-13 23:49:06.899590092 +0000 UTC m=+33.833475966" lastFinishedPulling="2025-05-13 23:49:10.606483607 +0000 UTC m=+37.540369481" observedRunningTime="2025-05-13 23:49:11.432596173 +0000 UTC m=+38.366482087" watchObservedRunningTime="2025-05-13 23:49:11.433261069 +0000 UTC m=+38.367146943"
May 13 23:49:11.460983 kubelet[2582]: I0513 23:49:11.460897 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6446b7dc94-zh6x9" podStartSLOduration=26.460879425999998 podStartE2EDuration="26.460879426s" podCreationTimestamp="2025-05-13 23:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:49:11.460774257 +0000 UTC m=+38.394660131" watchObservedRunningTime="2025-05-13 23:49:11.460879426 +0000 UTC m=+38.394765300"
May 13 23:49:12.144720 containerd[1479]: time="2025-05-13T23:49:12.144646539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:49:12.146237 containerd[1479]: time="2025-05-13T23:49:12.146008050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116"
May 13 23:49:12.148238 containerd[1479]: time="2025-05-13T23:49:12.147276473Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:49:12.182309 containerd[1479]: time="2025-05-13T23:49:12.180816163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:49:12.182309 containerd[1479]: time="2025-05-13T23:49:12.181465416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.574797513s"
May 13 23:49:12.182309 containerd[1479]: time="2025-05-13T23:49:12.181945495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\""
May 13 23:49:12.198158 containerd[1479]: time="2025-05-13T23:49:12.198109491Z" level=info msg="CreateContainer within sandbox \"c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 13 23:49:12.212624 containerd[1479]: time="2025-05-13T23:49:12.212574068Z" level=info msg="Container 5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14: CDI devices from CRI Config.CDIDevices: []"
May 13 23:49:12.235665 containerd[1479]: time="2025-05-13T23:49:12.235605583Z" level=info msg="CreateContainer within sandbox \"c7e5a4b46ac0ffce02f32cf3bf0417507b354b69b441c20b0d13e64b20270bdc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14\""
May 13 23:49:12.238595 containerd[1479]: time="2025-05-13T23:49:12.236299439Z" level=info msg="StartContainer for \"5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14\""
May 13 23:49:12.238595 containerd[1479]: time="2025-05-13T23:49:12.237467415Z" level=info msg="connecting to shim 5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14" address="unix:///run/containerd/s/8c697c596353e5ed95889b8574ad1ac7f66dc674f91a8de15852bc9e19902e16" protocol=ttrpc version=3
May 13 23:49:12.261805 systemd[1]: Started cri-containerd-5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14.scope - libcontainer container 5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14.
May 13 23:49:12.278764 systemd-networkd[1387]: cali65cb1c014cb: Gained IPv6LL
May 13 23:49:12.314341 containerd[1479]: time="2025-05-13T23:49:12.314248145Z" level=info msg="StartContainer for \"5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14\" returns successfully"
May 13 23:49:12.415102 kubelet[2582]: I0513 23:49:12.414993 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:49:12.437955 kubelet[2582]: I0513 23:49:12.437853 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77c58f7969-87dkr" podStartSLOduration=23.767078046 podStartE2EDuration="26.437833685s" podCreationTimestamp="2025-05-13 23:48:46 +0000 UTC" firstStartedPulling="2025-05-13 23:49:09.513142895 +0000 UTC m=+36.447028769" lastFinishedPulling="2025-05-13 23:49:12.183898534 +0000 UTC m=+39.117784408" observedRunningTime="2025-05-13 23:49:12.436537299 +0000 UTC m=+39.370423213" watchObservedRunningTime="2025-05-13 23:49:12.437833685 +0000 UTC m=+39.371719559"
May 13 23:49:12.551948 containerd[1479]: time="2025-05-13T23:49:12.551907891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14\" id:\"69f7acbac2087ce0b7ec34533555a4a129d7773e55a336bef50000bef6ff7ed7\" pid:4811 exited_at:{seconds:1747180152 nanos:551493417}"
May 13 23:49:12.668308 kubelet[2582]: I0513 23:49:12.668137 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:49:13.932738 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:56510.service - OpenSSH per-connection server daemon (10.0.0.1:56510).
May 13 23:49:14.005413 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 56510 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:14.010731 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:14.022391 systemd-logind[1457]: New session 10 of user core.
May 13 23:49:14.032824 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 23:49:14.272768 sshd[4855]: Connection closed by 10.0.0.1 port 56510
May 13 23:49:14.271764 sshd-session[4853]: pam_unix(sshd:session): session closed for user core
May 13 23:49:14.281796 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:56510.service: Deactivated successfully.
May 13 23:49:14.283534 systemd[1]: session-10.scope: Deactivated successfully.
May 13 23:49:14.284212 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit.
May 13 23:49:14.288168 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:56522.service - OpenSSH per-connection server daemon (10.0.0.1:56522).
May 13 23:49:14.289290 systemd-logind[1457]: Removed session 10.
May 13 23:49:14.342725 sshd[4882]: Accepted publickey for core from 10.0.0.1 port 56522 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:14.344231 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:14.350368 systemd-logind[1457]: New session 11 of user core.
May 13 23:49:14.361796 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:49:14.588363 sshd[4897]: Connection closed by 10.0.0.1 port 56522
May 13 23:49:14.589249 sshd-session[4882]: pam_unix(sshd:session): session closed for user core
May 13 23:49:14.603525 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:56522.service: Deactivated successfully.
May 13 23:49:14.607498 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:49:14.608433 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit.
May 13 23:49:14.613329 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:56534.service - OpenSSH per-connection server daemon (10.0.0.1:56534).
May 13 23:49:14.616990 systemd-logind[1457]: Removed session 11.
May 13 23:49:14.670373 sshd[4907]: Accepted publickey for core from 10.0.0.1 port 56534 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:14.671779 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:14.677338 systemd-logind[1457]: New session 12 of user core.
May 13 23:49:14.686781 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:49:14.847733 sshd[4910]: Connection closed by 10.0.0.1 port 56534
May 13 23:49:14.850363 sshd-session[4907]: pam_unix(sshd:session): session closed for user core
May 13 23:49:14.856071 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:56534.service: Deactivated successfully.
May 13 23:49:14.859292 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:49:14.860244 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
May 13 23:49:14.861126 systemd-logind[1457]: Removed session 12.
May 13 23:49:15.659190 kubelet[2582]: I0513 23:49:15.659133 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:49:15.735729 kubelet[2582]: I0513 23:49:15.734855 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:49:15.807242 containerd[1479]: time="2025-05-13T23:49:15.807178615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444\" id:\"ac30932a2b19cd55a0942ad2b624a49ab121bab4b30b92f90487a848d03a79a7\" pid:4960 exit_status:1 exited_at:{seconds:1747180155 nanos:806809388}"
May 13 23:49:15.873578 containerd[1479]: time="2025-05-13T23:49:15.873518783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444\" id:\"0d5e1881d803e5253bfdbf548ae015f9a869b1f4cc41c55531c13bd6c0b69558\" pid:4984 exit_status:1 exited_at:{seconds:1747180155 nanos:872928619}"
May 13 23:49:15.975614 kernel: bpftool[5013]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 13 23:49:16.155099 systemd-networkd[1387]: vxlan.calico: Link UP
May 13 23:49:16.155109 systemd-networkd[1387]: vxlan.calico: Gained carrier
May 13 23:49:17.590747 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL
May 13 23:49:19.862150 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:56548.service - OpenSSH per-connection server daemon (10.0.0.1:56548).
May 13 23:49:19.935583 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 56548 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:19.937776 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:19.946290 systemd-logind[1457]: New session 13 of user core.
May 13 23:49:19.964784 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:49:20.242477 sshd[5144]: Connection closed by 10.0.0.1 port 56548
May 13 23:49:20.242370 sshd-session[5142]: pam_unix(sshd:session): session closed for user core
May 13 23:49:20.248271 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
May 13 23:49:20.249144 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:56548.service: Deactivated successfully.
May 13 23:49:20.253738 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:49:20.254759 systemd-logind[1457]: Removed session 13.
May 13 23:49:25.257271 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:44820.service - OpenSSH per-connection server daemon (10.0.0.1:44820).
May 13 23:49:25.323514 sshd[5168]: Accepted publickey for core from 10.0.0.1 port 44820 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:25.326693 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:25.333248 systemd-logind[1457]: New session 14 of user core.
May 13 23:49:25.350848 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:49:25.498019 sshd[5170]: Connection closed by 10.0.0.1 port 44820
May 13 23:49:25.498450 sshd-session[5168]: pam_unix(sshd:session): session closed for user core
May 13 23:49:25.502433 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:44820.service: Deactivated successfully.
May 13 23:49:25.504786 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:49:25.505817 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
May 13 23:49:25.507119 systemd-logind[1457]: Removed session 14.
May 13 23:49:30.518859 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:44836.service - OpenSSH per-connection server daemon (10.0.0.1:44836).
May 13 23:49:30.580902 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 44836 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:30.582257 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:30.586900 systemd-logind[1457]: New session 15 of user core.
May 13 23:49:30.596837 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:49:30.750495 sshd[5195]: Connection closed by 10.0.0.1 port 44836
May 13 23:49:30.752050 sshd-session[5193]: pam_unix(sshd:session): session closed for user core
May 13 23:49:30.756958 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
May 13 23:49:30.757082 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:44836.service: Deactivated successfully.
May 13 23:49:30.759469 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:49:30.761700 systemd-logind[1457]: Removed session 15.
May 13 23:49:31.333409 kubelet[2582]: I0513 23:49:31.333372 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 23:49:35.765143 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:38546.service - OpenSSH per-connection server daemon (10.0.0.1:38546).
May 13 23:49:35.831644 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 38546 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:35.834160 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:35.840519 systemd-logind[1457]: New session 16 of user core.
May 13 23:49:35.851821 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:49:36.002612 sshd[5214]: Connection closed by 10.0.0.1 port 38546
May 13 23:49:36.004859 sshd-session[5212]: pam_unix(sshd:session): session closed for user core
May 13 23:49:36.015423 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:38546.service: Deactivated successfully.
May 13 23:49:36.019252 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:49:36.021734 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
May 13 23:49:36.024272 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:38562.service - OpenSSH per-connection server daemon (10.0.0.1:38562).
May 13 23:49:36.025663 systemd-logind[1457]: Removed session 16.
May 13 23:49:36.080489 sshd[5227]: Accepted publickey for core from 10.0.0.1 port 38562 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:36.081864 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:36.086910 systemd-logind[1457]: New session 17 of user core.
May 13 23:49:36.093756 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:49:36.356910 sshd[5230]: Connection closed by 10.0.0.1 port 38562
May 13 23:49:36.357912 sshd-session[5227]: pam_unix(sshd:session): session closed for user core
May 13 23:49:36.371240 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:38576.service - OpenSSH per-connection server daemon (10.0.0.1:38576).
May 13 23:49:36.371709 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:38562.service: Deactivated successfully.
May 13 23:49:36.375089 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:49:36.376500 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
May 13 23:49:36.377692 systemd-logind[1457]: Removed session 17.
May 13 23:49:36.428040 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 38576 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:36.429525 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:36.437329 systemd-logind[1457]: New session 18 of user core.
May 13 23:49:36.439747 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:49:37.267953 sshd[5244]: Connection closed by 10.0.0.1 port 38576
May 13 23:49:37.268472 sshd-session[5239]: pam_unix(sshd:session): session closed for user core
May 13 23:49:37.277355 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:38576.service: Deactivated successfully.
May 13 23:49:37.279387 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:49:37.280756 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
May 13 23:49:37.284250 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:38586.service - OpenSSH per-connection server daemon (10.0.0.1:38586).
May 13 23:49:37.291143 systemd-logind[1457]: Removed session 18.
May 13 23:49:37.340200 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 38586 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:37.342063 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:37.346804 systemd-logind[1457]: New session 19 of user core.
May 13 23:49:37.354734 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:49:37.675610 sshd[5277]: Connection closed by 10.0.0.1 port 38586
May 13 23:49:37.675427 sshd-session[5274]: pam_unix(sshd:session): session closed for user core
May 13 23:49:37.684930 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:38596.service - OpenSSH per-connection server daemon (10.0.0.1:38596).
May 13 23:49:37.685420 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:38586.service: Deactivated successfully.
May 13 23:49:37.688085 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:49:37.689901 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
May 13 23:49:37.692374 systemd-logind[1457]: Removed session 19.
May 13 23:49:37.737045 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 38596 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:37.738433 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:37.742892 systemd-logind[1457]: New session 20 of user core.
May 13 23:49:37.754762 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:49:37.887410 sshd[5291]: Connection closed by 10.0.0.1 port 38596
May 13 23:49:37.887787 sshd-session[5286]: pam_unix(sshd:session): session closed for user core
May 13 23:49:37.891356 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:38596.service: Deactivated successfully.
May 13 23:49:37.893276 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:49:37.894060 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
May 13 23:49:37.895159 systemd-logind[1457]: Removed session 20.
May 13 23:49:42.465234 containerd[1479]: time="2025-05-13T23:49:42.465187865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5da19894fead1a3e4f02b4cdb621934c3ea7b8e2468c6a61cffc0a6961a06b14\" id:\"5af6ad2b401eaf2cc80dc228b0101e9c4630c2140b8a7d9b064e359cd0312f46\" pid:5321 exited_at:{seconds:1747180182 nanos:464673596}"
May 13 23:49:42.905384 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:50042.service - OpenSSH per-connection server daemon (10.0.0.1:50042).
May 13 23:49:42.957433 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 50042 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:42.958935 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:42.964541 systemd-logind[1457]: New session 21 of user core.
May 13 23:49:42.974741 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:49:43.107629 sshd[5334]: Connection closed by 10.0.0.1 port 50042
May 13 23:49:43.108189 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
May 13 23:49:43.111451 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:50042.service: Deactivated successfully.
May 13 23:49:43.113512 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:49:43.114297 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit.
May 13 23:49:43.115363 systemd-logind[1457]: Removed session 21.
May 13 23:49:45.868960 containerd[1479]: time="2025-05-13T23:49:45.868918815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b9c11762dcd36cd573098c860cf32ed9b23f719a0f7b22b09a81fe0be3d5444\" id:\"9d5efd30375e21a19ad2ab61a18f1d39147f709d6d0069cbc756d2f9f7d9a40b\" pid:5358 exited_at:{seconds:1747180185 nanos:868566221}"
May 13 23:49:48.121855 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:50050.service - OpenSSH per-connection server daemon (10.0.0.1:50050).
May 13 23:49:48.200984 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 50050 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:48.203389 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:48.216524 systemd-logind[1457]: New session 22 of user core.
May 13 23:49:48.226832 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:49:48.436573 sshd[5374]: Connection closed by 10.0.0.1 port 50050
May 13 23:49:48.437768 sshd-session[5372]: pam_unix(sshd:session): session closed for user core
May 13 23:49:48.441432 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:50050.service: Deactivated successfully.
May 13 23:49:48.443132 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:49:48.445143 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit.
May 13 23:49:48.446063 systemd-logind[1457]: Removed session 22.
May 13 23:49:53.454693 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:33816.service - OpenSSH per-connection server daemon (10.0.0.1:33816).
May 13 23:49:53.505786 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 33816 ssh2: RSA SHA256:OJP9RQeqgGpOjAZaZzevsTVvmgqdZ2yoHQkAtvY14+M
May 13 23:49:53.507149 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:49:53.516046 systemd-logind[1457]: New session 23 of user core.
May 13 23:49:53.527752 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:49:53.727283 sshd[5391]: Connection closed by 10.0.0.1 port 33816
May 13 23:49:53.727624 sshd-session[5389]: pam_unix(sshd:session): session closed for user core
May 13 23:49:53.731593 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:33816.service: Deactivated successfully.
May 13 23:49:53.735194 systemd[1]: session-23.scope: Deactivated successfully.
May 13 23:49:53.736957 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit.
May 13 23:49:53.738062 systemd-logind[1457]: Removed session 23.