May 13 12:36:55.824808 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 12:36:55.824830 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 11:28:23 -00 2025
May 13 12:36:55.824839 kernel: KASLR enabled
May 13 12:36:55.824845 kernel: efi: EFI v2.7 by EDK II
May 13 12:36:55.824850 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 13 12:36:55.824855 kernel: random: crng init done
May 13 12:36:55.824862 kernel: secureboot: Secure boot disabled
May 13 12:36:55.824868 kernel: ACPI: Early table checksum verification disabled
May 13 12:36:55.824874 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 13 12:36:55.824881 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 12:36:55.824899 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824906 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824912 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824918 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824925 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824933 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824939 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824945 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824951 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:36:55.824957 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 12:36:55.824963 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 13 12:36:55.824969 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:36:55.824975 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 13 12:36:55.824981 kernel: Zone ranges:
May 13 12:36:55.824987 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:36:55.824995 kernel: DMA32 empty
May 13 12:36:55.825000 kernel: Normal empty
May 13 12:36:55.825006 kernel: Device empty
May 13 12:36:55.825012 kernel: Movable zone start for each node
May 13 12:36:55.825018 kernel: Early memory node ranges
May 13 12:36:55.825024 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 13 12:36:55.825030 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 13 12:36:55.825036 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 13 12:36:55.825042 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 13 12:36:55.825048 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 13 12:36:55.825054 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 13 12:36:55.825060 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 13 12:36:55.825067 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 13 12:36:55.825073 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 13 12:36:55.825079 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 12:36:55.825088 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 12:36:55.825094 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 12:36:55.825100 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 12:36:55.825108 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:36:55.825115 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 12:36:55.825121 kernel: psci: probing for conduit method from ACPI.
May 13 12:36:55.825128 kernel: psci: PSCIv1.1 detected in firmware.
May 13 12:36:55.825134 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 12:36:55.825140 kernel: psci: Trusted OS migration not required
May 13 12:36:55.825147 kernel: psci: SMC Calling Convention v1.1
May 13 12:36:55.825153 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 12:36:55.825160 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 13 12:36:55.825167 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 13 12:36:55.825175 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 12:36:55.825182 kernel: Detected PIPT I-cache on CPU0
May 13 12:36:55.825189 kernel: CPU features: detected: GIC system register CPU interface
May 13 12:36:55.825195 kernel: CPU features: detected: Spectre-v4
May 13 12:36:55.825201 kernel: CPU features: detected: Spectre-BHB
May 13 12:36:55.825208 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 12:36:55.825214 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 12:36:55.825221 kernel: CPU features: detected: ARM erratum 1418040
May 13 12:36:55.825227 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 12:36:55.825234 kernel: alternatives: applying boot alternatives
May 13 12:36:55.825241 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b20e935bbd8772a1b0c6883755acb6e2a52b7a903a0b8e12c8ff59ca86b84928
May 13 12:36:55.825250 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 12:36:55.825256 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 12:36:55.825263 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 12:36:55.825269 kernel: Fallback order for Node 0: 0
May 13 12:36:55.825275 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 13 12:36:55.825282 kernel: Policy zone: DMA
May 13 12:36:55.825288 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 12:36:55.825294 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 13 12:36:55.825301 kernel: software IO TLB: area num 4.
May 13 12:36:55.825307 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 13 12:36:55.825313 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 13 12:36:55.825320 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 12:36:55.825328 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 12:36:55.825334 kernel: rcu: RCU event tracing is enabled.
May 13 12:36:55.825341 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 12:36:55.825347 kernel: Trampoline variant of Tasks RCU enabled.
May 13 12:36:55.825354 kernel: Tracing variant of Tasks RCU enabled.
May 13 12:36:55.825360 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 12:36:55.825367 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 12:36:55.825373 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:36:55.825380 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:36:55.825386 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 12:36:55.825392 kernel: GICv3: 256 SPIs implemented
May 13 12:36:55.825400 kernel: GICv3: 0 Extended SPIs implemented
May 13 12:36:55.825406 kernel: Root IRQ handler: gic_handle_irq
May 13 12:36:55.825413 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 12:36:55.825419 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 13 12:36:55.825425 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 12:36:55.825431 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 12:36:55.825438 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 13 12:36:55.825444 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 13 12:36:55.825451 kernel: GICv3: using LPI property table @0x0000000040100000
May 13 12:36:55.825457 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 13 12:36:55.825464 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 12:36:55.825470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:36:55.825478 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 12:36:55.825485 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 12:36:55.825492 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 12:36:55.825498 kernel: arm-pv: using stolen time PV
May 13 12:36:55.825505 kernel: Console: colour dummy device 80x25
May 13 12:36:55.825511 kernel: ACPI: Core revision 20240827
May 13 12:36:55.825518 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 12:36:55.825525 kernel: pid_max: default: 32768 minimum: 301
May 13 12:36:55.825532 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 12:36:55.825540 kernel: landlock: Up and running.
May 13 12:36:55.825546 kernel: SELinux: Initializing.
May 13 12:36:55.825552 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:36:55.825559 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:36:55.825566 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 12:36:55.825573 kernel: rcu: Hierarchical SRCU implementation.
May 13 12:36:55.825580 kernel: rcu: Max phase no-delay instances is 400.
May 13 12:36:55.825587 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 12:36:55.825593 kernel: Remapping and enabling EFI services.
May 13 12:36:55.825601 kernel: smp: Bringing up secondary CPUs ...
May 13 12:36:55.825612 kernel: Detected PIPT I-cache on CPU1
May 13 12:36:55.825619 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 12:36:55.825628 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 13 12:36:55.825635 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:36:55.825641 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 12:36:55.825648 kernel: Detected PIPT I-cache on CPU2
May 13 12:36:55.825655 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 12:36:55.825662 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 13 12:36:55.825670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:36:55.825677 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 12:36:55.825684 kernel: Detected PIPT I-cache on CPU3
May 13 12:36:55.825691 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 12:36:55.825698 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 13 12:36:55.825710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:36:55.825720 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 12:36:55.825727 kernel: smp: Brought up 1 node, 4 CPUs
May 13 12:36:55.825734 kernel: SMP: Total of 4 processors activated.
May 13 12:36:55.825743 kernel: CPU: All CPU(s) started at EL1
May 13 12:36:55.825750 kernel: CPU features: detected: 32-bit EL0 Support
May 13 12:36:55.825756 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 12:36:55.825764 kernel: CPU features: detected: Common not Private translations
May 13 12:36:55.825770 kernel: CPU features: detected: CRC32 instructions
May 13 12:36:55.825777 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 12:36:55.825784 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 12:36:55.825791 kernel: CPU features: detected: LSE atomic instructions
May 13 12:36:55.825798 kernel: CPU features: detected: Privileged Access Never
May 13 12:36:55.825806 kernel: CPU features: detected: RAS Extension Support
May 13 12:36:55.825813 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 12:36:55.825820 kernel: alternatives: applying system-wide alternatives
May 13 12:36:55.825827 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 13 12:36:55.825834 kernel: Memory: 2440920K/2572288K available (11072K kernel code, 2276K rwdata, 8932K rodata, 39488K init, 1034K bss, 125600K reserved, 0K cma-reserved)
May 13 12:36:55.825841 kernel: devtmpfs: initialized
May 13 12:36:55.825848 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 12:36:55.825855 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 12:36:55.825862 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 12:36:55.825871 kernel: 0 pages in range for non-PLT usage
May 13 12:36:55.825878 kernel: 508528 pages in range for PLT usage
May 13 12:36:55.825884 kernel: pinctrl core: initialized pinctrl subsystem
May 13 12:36:55.825905 kernel: SMBIOS 3.0.0 present.
May 13 12:36:55.825912 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 12:36:55.825919 kernel: DMI: Memory slots populated: 1/1
May 13 12:36:55.825926 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 12:36:55.825933 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 12:36:55.825940 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 12:36:55.825950 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 12:36:55.825957 kernel: audit: initializing netlink subsys (disabled)
May 13 12:36:55.825964 kernel: audit: type=2000 audit(0.028:1): state=initialized audit_enabled=0 res=1
May 13 12:36:55.825971 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 12:36:55.825978 kernel: cpuidle: using governor menu
May 13 12:36:55.825984 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 12:36:55.825991 kernel: ASID allocator initialised with 32768 entries
May 13 12:36:55.825998 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 12:36:55.826005 kernel: Serial: AMBA PL011 UART driver
May 13 12:36:55.826014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 12:36:55.826021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 12:36:55.826028 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 12:36:55.826035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 12:36:55.826041 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 12:36:55.826048 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 12:36:55.826055 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 12:36:55.826063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 12:36:55.826069 kernel: ACPI: Added _OSI(Module Device)
May 13 12:36:55.826077 kernel: ACPI: Added _OSI(Processor Device)
May 13 12:36:55.826084 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 12:36:55.826091 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 12:36:55.826098 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 12:36:55.826105 kernel: ACPI: Interpreter enabled
May 13 12:36:55.826111 kernel: ACPI: Using GIC for interrupt routing
May 13 12:36:55.826118 kernel: ACPI: MCFG table detected, 1 entries
May 13 12:36:55.826125 kernel: ACPI: CPU0 has been hot-added
May 13 12:36:55.826132 kernel: ACPI: CPU1 has been hot-added
May 13 12:36:55.826140 kernel: ACPI: CPU2 has been hot-added
May 13 12:36:55.826147 kernel: ACPI: CPU3 has been hot-added
May 13 12:36:55.826154 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 12:36:55.826160 kernel: printk: legacy console [ttyAMA0] enabled
May 13 12:36:55.826167 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 12:36:55.826295 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 12:36:55.826359 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 12:36:55.826417 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 12:36:55.826476 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 12:36:55.826533 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 12:36:55.826542 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 12:36:55.826550 kernel: PCI host bridge to bus 0000:00
May 13 12:36:55.826618 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 12:36:55.826675 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 12:36:55.826743 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 12:36:55.826801 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 12:36:55.826873 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 13 12:36:55.826957 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 12:36:55.827020 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 13 12:36:55.827079 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 13 12:36:55.827138 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 12:36:55.827196 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 13 12:36:55.827258 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 13 12:36:55.827316 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 13 12:36:55.827369 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 12:36:55.827421 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 12:36:55.827473 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 12:36:55.827482 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 12:36:55.827489 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 12:36:55.827498 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 12:36:55.827505 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 12:36:55.827511 kernel: iommu: Default domain type: Translated
May 13 12:36:55.827518 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 12:36:55.827525 kernel: efivars: Registered efivars operations
May 13 12:36:55.827532 kernel: vgaarb: loaded
May 13 12:36:55.827539 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 12:36:55.827546 kernel: VFS: Disk quotas dquot_6.6.0
May 13 12:36:55.827553 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 12:36:55.827561 kernel: pnp: PnP ACPI init
May 13 12:36:55.827624 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 12:36:55.827634 kernel: pnp: PnP ACPI: found 1 devices
May 13 12:36:55.827641 kernel: NET: Registered PF_INET protocol family
May 13 12:36:55.827648 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 12:36:55.827655 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 12:36:55.827662 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 12:36:55.827669 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 12:36:55.827678 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 12:36:55.827685 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 12:36:55.827692 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:36:55.827699 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:36:55.827714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 12:36:55.827721 kernel: PCI: CLS 0 bytes, default 64
May 13 12:36:55.827728 kernel: kvm [1]: HYP mode not available
May 13 12:36:55.827735 kernel: Initialise system trusted keyrings
May 13 12:36:55.827742 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 12:36:55.827750 kernel: Key type asymmetric registered
May 13 12:36:55.827757 kernel: Asymmetric key parser 'x509' registered
May 13 12:36:55.827764 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 12:36:55.827771 kernel: io scheduler mq-deadline registered
May 13 12:36:55.827778 kernel: io scheduler kyber registered
May 13 12:36:55.827785 kernel: io scheduler bfq registered
May 13 12:36:55.827792 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 12:36:55.827799 kernel: ACPI: button: Power Button [PWRB]
May 13 12:36:55.827806 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 12:36:55.827871 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 12:36:55.827881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 12:36:55.827896 kernel: thunder_xcv, ver 1.0
May 13 12:36:55.827904 kernel: thunder_bgx, ver 1.0
May 13 12:36:55.827911 kernel: nicpf, ver 1.0
May 13 12:36:55.827918 kernel: nicvf, ver 1.0
May 13 12:36:55.827988 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 12:36:55.828044 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T12:36:55 UTC (1747139815)
May 13 12:36:55.828056 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 12:36:55.828063 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 13 12:36:55.828070 kernel: watchdog: NMI not fully supported
May 13 12:36:55.828077 kernel: watchdog: Hard watchdog permanently disabled
May 13 12:36:55.828084 kernel: NET: Registered PF_INET6 protocol family
May 13 12:36:55.828091 kernel: Segment Routing with IPv6
May 13 12:36:55.828098 kernel: In-situ OAM (IOAM) with IPv6
May 13 12:36:55.828105 kernel: NET: Registered PF_PACKET protocol family
May 13 12:36:55.828112 kernel: Key type dns_resolver registered
May 13 12:36:55.828120 kernel: registered taskstats version 1
May 13 12:36:55.828127 kernel: Loading compiled-in X.509 certificates
May 13 12:36:55.828134 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: f8df872077a0531ef71a44c67653908e8a70c520'
May 13 12:36:55.828141 kernel: Demotion targets for Node 0: null
May 13 12:36:55.828148 kernel: Key type .fscrypt registered
May 13 12:36:55.828155 kernel: Key type fscrypt-provisioning registered
May 13 12:36:55.828162 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 12:36:55.828169 kernel: ima: Allocated hash algorithm: sha1
May 13 12:36:55.828176 kernel: ima: No architecture policies found
May 13 12:36:55.828184 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 12:36:55.828191 kernel: clk: Disabling unused clocks
May 13 12:36:55.828197 kernel: PM: genpd: Disabling unused power domains
May 13 12:36:55.828204 kernel: Warning: unable to open an initial console.
May 13 12:36:55.828211 kernel: Freeing unused kernel memory: 39488K
May 13 12:36:55.828218 kernel: Run /init as init process
May 13 12:36:55.828225 kernel: with arguments:
May 13 12:36:55.828232 kernel: /init
May 13 12:36:55.828239 kernel: with environment:
May 13 12:36:55.828247 kernel: HOME=/
May 13 12:36:55.828254 kernel: TERM=linux
May 13 12:36:55.828261 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 12:36:55.828268 systemd[1]: Successfully made /usr/ read-only.
May 13 12:36:55.828278 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:36:55.828285 systemd[1]: Detected virtualization kvm.
May 13 12:36:55.828293 systemd[1]: Detected architecture arm64.
May 13 12:36:55.828301 systemd[1]: Running in initrd.
May 13 12:36:55.828308 systemd[1]: No hostname configured, using default hostname.
May 13 12:36:55.828316 systemd[1]: Hostname set to .
May 13 12:36:55.828323 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:36:55.828330 systemd[1]: Queued start job for default target initrd.target.
May 13 12:36:55.828338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:36:55.828345 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:36:55.828353 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 12:36:55.828361 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:36:55.828370 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 12:36:55.828378 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 12:36:55.828386 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 12:36:55.828394 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 12:36:55.828402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:36:55.828409 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:36:55.828418 systemd[1]: Reached target paths.target - Path Units.
May 13 12:36:55.828425 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:36:55.828433 systemd[1]: Reached target swap.target - Swaps.
May 13 12:36:55.828440 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:36:55.828448 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:36:55.828455 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:36:55.828463 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 12:36:55.828470 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 12:36:55.828478 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:36:55.828487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:36:55.828495 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:36:55.828502 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:36:55.828510 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 12:36:55.828517 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:36:55.828525 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 12:36:55.828533 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 12:36:55.828541 systemd[1]: Starting systemd-fsck-usr.service...
May 13 12:36:55.828549 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:36:55.828557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:36:55.828564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:36:55.828572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:36:55.828580 systemd[1]: Finished systemd-fsck-usr.service.
May 13 12:36:55.828589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 12:36:55.828597 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 12:36:55.828619 systemd-journald[244]: Collecting audit messages is disabled.
May 13 12:36:55.828638 systemd-journald[244]: Journal started
May 13 12:36:55.828658 systemd-journald[244]: Runtime Journal (/run/log/journal/e2e64570d8ae4f14af56ff4f16ea0556) is 6M, max 48.5M, 42.4M free.
May 13 12:36:55.819279 systemd-modules-load[246]: Inserted module 'overlay'
May 13 12:36:55.833739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:36:55.835526 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:36:55.835544 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 12:36:55.841254 kernel: Bridge firewalling registered
May 13 12:36:55.839493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 12:36:55.840915 systemd-modules-load[246]: Inserted module 'br_netfilter'
May 13 12:36:55.841569 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:36:55.843586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:36:55.853983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 12:36:55.856532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:36:55.857999 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:36:55.861817 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 12:36:55.865133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:36:55.868852 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:36:55.871022 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:36:55.872657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:36:55.874763 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 12:36:55.876796 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:36:55.900396 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b20e935bbd8772a1b0c6883755acb6e2a52b7a903a0b8e12c8ff59ca86b84928
May 13 12:36:55.915293 systemd-resolved[288]: Positive Trust Anchors:
May 13 12:36:55.915310 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:36:55.915341 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:36:55.920039 systemd-resolved[288]: Defaulting to hostname 'linux'.
May 13 12:36:55.920965 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:36:55.923839 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:36:55.972914 kernel: SCSI subsystem initialized
May 13 12:36:55.977902 kernel: Loading iSCSI transport class v2.0-870.
May 13 12:36:55.986912 kernel: iscsi: registered transport (tcp)
May 13 12:36:56.000905 kernel: iscsi: registered transport (qla4xxx)
May 13 12:36:56.000919 kernel: QLogic iSCSI HBA Driver
May 13 12:36:56.017547 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:36:56.031958 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:36:56.034919 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:36:56.078993 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 12:36:56.080651 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 12:36:56.143915 kernel: raid6: neonx8 gen() 15792 MB/s
May 13 12:36:56.160898 kernel: raid6: neonx4 gen() 15817 MB/s
May 13 12:36:56.177901 kernel: raid6: neonx2 gen() 13223 MB/s
May 13 12:36:56.194908 kernel: raid6: neonx1 gen() 10555 MB/s
May 13 12:36:56.211915 kernel: raid6: int64x8 gen() 6908 MB/s
May 13 12:36:56.228911 kernel: raid6: int64x4 gen() 7366 MB/s
May 13 12:36:56.245900 kernel: raid6: int64x2 gen() 6109 MB/s
May 13 12:36:56.262900 kernel: raid6: int64x1 gen() 5059 MB/s
May 13 12:36:56.262919 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s
May 13 12:36:56.279911 kernel: raid6: .... xor() 12382 MB/s, rmw enabled
May 13 12:36:56.279934 kernel: raid6: using neon recovery algorithm
May 13 12:36:56.285240 kernel: xor: measuring software checksum speed
May 13 12:36:56.285256 kernel: 8regs : 21636 MB/sec
May 13 12:36:56.285269 kernel: 32regs : 21150 MB/sec
May 13 12:36:56.286179 kernel: arm64_neon : 28089 MB/sec
May 13 12:36:56.286192 kernel: xor: using function: arm64_neon (28089 MB/sec)
May 13 12:36:56.340915 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 12:36:56.346881 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:36:56.350002 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:36:56.380654 systemd-udevd[498]: Using default interface naming scheme 'v255'.
May 13 12:36:56.384674 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:36:56.386323 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 12:36:56.405749 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
May 13 12:36:56.426364 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:36:56.428325 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:36:56.481650 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:36:56.484954 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 12:36:56.527556 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 12:36:56.527746 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 12:36:56.531169 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 12:36:56.531207 kernel: GPT:9289727 != 19775487
May 13 12:36:56.531218 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 12:36:56.531227 kernel: GPT:9289727 != 19775487
May 13 12:36:56.531912 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 12:36:56.532922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:36:56.535875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:36:56.536001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:36:56.539761 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:36:56.541630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:36:56.567378 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 12:36:56.572832 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:36:56.582397 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 12:36:56.583541 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 12:36:56.594952 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 12:36:56.595811 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 12:36:56.604172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:36:56.605077 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:36:56.606561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:36:56.608099 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:36:56.610229 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 12:36:56.611648 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 12:36:56.635550 disk-uuid[593]: Primary Header is updated.
May 13 12:36:56.635550 disk-uuid[593]: Secondary Entries is updated.
May 13 12:36:56.635550 disk-uuid[593]: Secondary Header is updated.
May 13 12:36:56.638630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:36:56.640623 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:36:57.648672 disk-uuid[599]: The operation has completed successfully.
May 13 12:36:57.650270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:36:57.674681 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 12:36:57.674804 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 12:36:57.698603 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 12:36:57.714635 sh[614]: Success
May 13 12:36:57.728369 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 12:36:57.728410 kernel: device-mapper: uevent: version 1.0.3
May 13 12:36:57.728421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 12:36:57.736968 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 13 12:36:57.760358 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 12:36:57.762421 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 12:36:57.778779 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 12:36:57.782910 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 12:36:57.785067 kernel: BTRFS: device fsid 5ded7f9d-c045-4eec-a161-ff9af5b01d28 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (626)
May 13 12:36:57.785095 kernel: BTRFS info (device dm-0): first mount of filesystem 5ded7f9d-c045-4eec-a161-ff9af5b01d28
May 13 12:36:57.785105 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 12:36:57.786210 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 12:36:57.789393 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 12:36:57.790425 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:36:57.791640 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 12:36:57.792375 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 12:36:57.794803 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 12:36:57.817929 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (657)
May 13 12:36:57.820037 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:36:57.820071 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:36:57.820906 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:36:57.825922 kernel: BTRFS info (device vda6): last unmount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:36:57.827198 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 12:36:57.829345 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 12:36:57.898870 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:36:57.901523 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:36:57.942838 systemd-networkd[802]: lo: Link UP
May 13 12:36:57.942850 systemd-networkd[802]: lo: Gained carrier
May 13 12:36:57.943571 systemd-networkd[802]: Enumeration completed
May 13 12:36:57.944382 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:36:57.944385 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:36:57.945180 systemd-networkd[802]: eth0: Link UP
May 13 12:36:57.945184 systemd-networkd[802]: eth0: Gained carrier
May 13 12:36:57.945192 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:36:57.947001 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:36:57.948252 systemd[1]: Reached target network.target - Network.
May 13 12:36:57.964355 ignition[699]: Ignition 2.21.0
May 13 12:36:57.964369 ignition[699]: Stage: fetch-offline
May 13 12:36:57.964401 ignition[699]: no configs at "/usr/lib/ignition/base.d"
May 13 12:36:57.964409 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:36:57.964716 ignition[699]: parsed url from cmdline: ""
May 13 12:36:57.966957 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 12:36:57.964720 ignition[699]: no config URL provided
May 13 12:36:57.964726 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
May 13 12:36:57.964739 ignition[699]: no config at "/usr/lib/ignition/user.ign"
May 13 12:36:57.965515 ignition[699]: op(1): [started] loading QEMU firmware config module
May 13 12:36:57.965521 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 12:36:57.972661 ignition[699]: op(1): [finished] loading QEMU firmware config module
May 13 12:36:58.010170 ignition[699]: parsing config with SHA512: 993b581812d177308be9af2753c4a73072d2bba853f418cd03e1938125927073bff5795b91dd7fa5b01aab67c7881093f026c309e18686edd2ca2933cb9faf77
May 13 12:36:58.015928 unknown[699]: fetched base config from "system"
May 13 12:36:58.015941 unknown[699]: fetched user config from "qemu"
May 13 12:36:58.016299 ignition[699]: fetch-offline: fetch-offline passed
May 13 12:36:58.016357 ignition[699]: Ignition finished successfully
May 13 12:36:58.018954 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:36:58.020804 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 12:36:58.022688 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 12:36:58.057456 ignition[815]: Ignition 2.21.0
May 13 12:36:58.057474 ignition[815]: Stage: kargs
May 13 12:36:58.057607 ignition[815]: no configs at "/usr/lib/ignition/base.d"
May 13 12:36:58.057615 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:36:58.059562 ignition[815]: kargs: kargs passed
May 13 12:36:58.059632 ignition[815]: Ignition finished successfully
May 13 12:36:58.061649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 12:36:58.063740 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 12:36:58.094137 ignition[823]: Ignition 2.21.0
May 13 12:36:58.094155 ignition[823]: Stage: disks
May 13 12:36:58.094289 ignition[823]: no configs at "/usr/lib/ignition/base.d"
May 13 12:36:58.094298 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:36:58.096756 ignition[823]: disks: disks passed
May 13 12:36:58.096823 ignition[823]: Ignition finished successfully
May 13 12:36:58.098740 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 12:36:58.100422 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 12:36:58.102010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 12:36:58.102850 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:36:58.104335 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:36:58.105587 systemd[1]: Reached target basic.target - Basic System.
May 13 12:36:58.107649 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 12:36:58.134933 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 12:36:58.138949 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 12:36:58.140738 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 12:36:58.217913 kernel: EXT4-fs (vda9): mounted filesystem 02660b30-6941-48da-9f0e-501a024e2c48 r/w with ordered data mode. Quota mode: none.
May 13 12:36:58.218013 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 12:36:58.219203 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 12:36:58.221718 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:36:58.223550 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 12:36:58.224539 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 12:36:58.224578 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 12:36:58.224601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:36:58.234263 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 12:36:58.235965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 12:36:58.238900 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (843)
May 13 12:36:58.241455 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:36:58.241488 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:36:58.241498 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:36:58.243519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:36:58.277318 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
May 13 12:36:58.280931 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
May 13 12:36:58.284568 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
May 13 12:36:58.288203 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 12:36:58.357587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 12:36:58.359581 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 12:36:58.360938 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 12:36:58.381932 kernel: BTRFS info (device vda6): last unmount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:36:58.398061 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 12:36:58.410278 ignition[956]: INFO : Ignition 2.21.0
May 13 12:36:58.410278 ignition[956]: INFO : Stage: mount
May 13 12:36:58.411525 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:36:58.411525 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:36:58.414010 ignition[956]: INFO : mount: mount passed
May 13 12:36:58.414010 ignition[956]: INFO : Ignition finished successfully
May 13 12:36:58.414399 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 12:36:58.416561 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 12:36:58.919046 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 12:36:58.920513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:36:58.948903 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (970)
May 13 12:36:58.950548 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:36:58.950575 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:36:58.950586 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:36:58.953424 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:36:58.984985 ignition[987]: INFO : Ignition 2.21.0
May 13 12:36:58.984985 ignition[987]: INFO : Stage: files
May 13 12:36:58.986267 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:36:58.986267 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:36:58.988054 ignition[987]: DEBUG : files: compiled without relabeling support, skipping
May 13 12:36:58.988054 ignition[987]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 12:36:58.988054 ignition[987]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 12:36:58.990979 ignition[987]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 12:36:58.990979 ignition[987]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 12:36:58.990979 ignition[987]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 12:36:58.990979 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 12:36:58.989745 unknown[987]: wrote ssh authorized keys file for user: core
May 13 12:36:58.996144 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 12:36:59.153760 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 12:36:59.309082 systemd-networkd[802]: eth0: Gained IPv6LL
May 13 12:36:59.459279 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 12:36:59.459279 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:36:59.462491 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 12:36:59.475206 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 12:36:59.475206 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 12:36:59.475206 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 12:36:59.786470 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 12:37:00.034769 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 12:37:00.034769 ignition[987]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 12:37:00.037909 ignition[987]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:37:00.037909 ignition[987]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:37:00.037909 ignition[987]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 12:37:00.037909 ignition[987]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 12:37:00.037909 ignition[987]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:37:00.037909 ignition[987]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:37:00.037909 ignition[987]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 12:37:00.037909 ignition[987]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 12:37:00.054581 ignition[987]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:37:00.057839 ignition[987]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:37:00.060044 ignition[987]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 12:37:00.060044 ignition[987]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 12:37:00.060044 ignition[987]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 12:37:00.060044 ignition[987]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:37:00.060044 ignition[987]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:37:00.060044 ignition[987]: INFO : files: files passed
May 13 12:37:00.060044 ignition[987]: INFO : Ignition finished successfully
May 13 12:37:00.060278 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 12:37:00.063166 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 12:37:00.067058 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 12:37:00.081826 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 12:37:00.081930 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 12:37:00.085213 initrd-setup-root-after-ignition[1017]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 12:37:00.088612 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:37:00.090090 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:37:00.090090 initrd-setup-root-after-ignition[1019]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:37:00.091874 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:37:00.093251 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 12:37:00.095400 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 12:37:00.142869 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 12:37:00.143666 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 12:37:00.146142 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 12:37:00.146921 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 12:37:00.148264 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 12:37:00.148998 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 12:37:00.169919 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:37:00.172196 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 12:37:00.194402 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 12:37:00.195324 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:37:00.196763 systemd[1]: Stopped target timers.target - Timer Units.
May 13 12:37:00.198072 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 12:37:00.198182 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:37:00.199987 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 12:37:00.201506 systemd[1]: Stopped target basic.target - Basic System.
May 13 12:37:00.202805 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 12:37:00.204045 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:37:00.205432 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 12:37:00.206811 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:37:00.208381 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 12:37:00.209779 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:37:00.211188 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 12:37:00.212604 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 12:37:00.213843 systemd[1]: Stopped target swap.target - Swaps.
May 13 12:37:00.214950 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 12:37:00.215056 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:37:00.216756 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 12:37:00.218192 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:37:00.219593 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 12:37:00.219669 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:37:00.221153 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 12:37:00.221262 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 12:37:00.223392 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 12:37:00.223505 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:37:00.224831 systemd[1]: Stopped target paths.target - Path Units.
May 13 12:37:00.226016 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 12:37:00.229923 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:37:00.230857 systemd[1]: Stopped target slices.target - Slice Units.
May 13 12:37:00.232427 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 12:37:00.233588 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 12:37:00.233672 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:37:00.234761 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 12:37:00.234832 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:37:00.236173 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 12:37:00.236287 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:37:00.237541 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 12:37:00.237644 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 12:37:00.239422 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 12:37:00.240553 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 12:37:00.240671 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:37:00.265185 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 12:37:00.265821 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 12:37:00.265957 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:37:00.267319 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 12:37:00.267404 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:37:00.272875 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 12:37:00.274022 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 12:37:00.277666 ignition[1043]: INFO : Ignition 2.21.0
May 13 12:37:00.277666 ignition[1043]: INFO : Stage: umount
May 13 12:37:00.279385 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:37:00.279385 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:37:00.279385 ignition[1043]: INFO : umount: umount passed
May 13 12:37:00.279385 ignition[1043]: INFO : Ignition finished successfully
May 13 12:37:00.279562 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 12:37:00.280345 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 12:37:00.281929 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 12:37:00.283690 systemd[1]: Stopped target network.target - Network.
May 13 12:37:00.284665 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 12:37:00.284737 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 12:37:00.286016 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 12:37:00.286054 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 12:37:00.287528 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 12:37:00.287574 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 12:37:00.289080 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 12:37:00.289122 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 12:37:00.290564 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 12:37:00.291940 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 12:37:00.300034 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 12:37:00.300130 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 12:37:00.303307 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 12:37:00.303468 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 12:37:00.303502 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:37:00.306591 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 12:37:00.316979 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 12:37:00.317087 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 12:37:00.320608 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 12:37:00.320749 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 13 12:37:00.321772 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 12:37:00.321802 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:37:00.324585 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 12:37:00.329828 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 12:37:00.329899 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:37:00.333936 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 12:37:00.333984 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 12:37:00.336977 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 12:37:00.337018 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 12:37:00.337835 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:37:00.342218 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 12:37:00.342486 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 12:37:00.342563 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 12:37:00.345510 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 12:37:00.345592 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 12:37:00.350184 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 12:37:00.350267 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 12:37:00.354526 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 12:37:00.354639 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:37:00.356050 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 12:37:00.356085 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 12:37:00.357441 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 12:37:00.357470 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:37:00.359182 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 12:37:00.359225 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:37:00.361481 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 12:37:00.361523 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 12:37:00.363641 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 12:37:00.363696 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:37:00.366611 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 12:37:00.368177 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 13 12:37:00.368234 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:37:00.370767 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 12:37:00.370808 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:37:00.373379 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:37:00.373443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:37:00.381454 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 12:37:00.381568 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 12:37:00.383443 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 12:37:00.385504 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 12:37:00.411213 systemd[1]: Switching root.
May 13 12:37:00.440690 systemd-journald[244]: Journal stopped
May 13 12:37:01.184157 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
May 13 12:37:01.184206 kernel: SELinux: policy capability network_peer_controls=1
May 13 12:37:01.184221 kernel: SELinux: policy capability open_perms=1
May 13 12:37:01.184231 kernel: SELinux: policy capability extended_socket_class=1
May 13 12:37:01.184243 kernel: SELinux: policy capability always_check_network=0
May 13 12:37:01.184253 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 12:37:01.184263 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 12:37:01.184277 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 12:37:01.184287 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 12:37:01.184300 kernel: SELinux: policy capability userspace_initial_context=0
May 13 12:37:01.184313 kernel: audit: type=1403 audit(1747139820.588:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 12:37:01.184324 systemd[1]: Successfully loaded SELinux policy in 43.044ms.
May 13 12:37:01.184337 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.178ms.
May 13 12:37:01.184348 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:37:01.184359 systemd[1]: Detected virtualization kvm.
May 13 12:37:01.184369 systemd[1]: Detected architecture arm64.
May 13 12:37:01.184383 systemd[1]: Detected first boot.
May 13 12:37:01.184393 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:37:01.184404 kernel: NET: Registered PF_VSOCK protocol family
May 13 12:37:01.184414 zram_generator::config[1089]: No configuration found.
May 13 12:37:01.184426 systemd[1]: Populated /etc with preset unit settings.
May 13 12:37:01.184437 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 12:37:01.184447 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 12:37:01.184457 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 12:37:01.184468 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 12:37:01.184478 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 12:37:01.184488 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 12:37:01.184498 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 12:37:01.184508 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 12:37:01.184518 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 12:37:01.184528 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 12:37:01.184538 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 12:37:01.184548 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 12:37:01.184560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:37:01.184571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:37:01.184581 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 12:37:01.184590 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 12:37:01.184601 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 12:37:01.184612 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:37:01.184622 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 12:37:01.184632 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:37:01.184645 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:37:01.184655 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 12:37:01.184665 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 12:37:01.184684 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 12:37:01.184696 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 12:37:01.184706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:37:01.184717 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:37:01.184727 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:37:01.184736 systemd[1]: Reached target swap.target - Swaps.
May 13 12:37:01.184748 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 12:37:01.184759 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 12:37:01.184769 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 12:37:01.184780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:37:01.184790 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:37:01.184800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:37:01.184811 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 12:37:01.184820 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 12:37:01.184831 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 12:37:01.184842 systemd[1]: Mounting media.mount - External Media Directory...
May 13 12:37:01.184852 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 12:37:01.184862 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 12:37:01.184873 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 12:37:01.184883 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 12:37:01.184903 systemd[1]: Reached target machines.target - Containers.
May 13 12:37:01.184914 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 12:37:01.184924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:37:01.184936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:37:01.184946 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 12:37:01.184956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:37:01.184966 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:37:01.184976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:37:01.184987 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 12:37:01.184997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:37:01.185007 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 12:37:01.185019 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 12:37:01.185029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 12:37:01.185039 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 12:37:01.185049 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 12:37:01.185059 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:37:01.185069 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:37:01.185079 kernel: loop: module loaded
May 13 12:37:01.185089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:37:01.185099 kernel: fuse: init (API version 7.41)
May 13 12:37:01.185110 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:37:01.185120 kernel: ACPI: bus type drm_connector registered
May 13 12:37:01.185130 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 12:37:01.185140 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 12:37:01.185150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:37:01.185162 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 12:37:01.185172 systemd[1]: Stopped verity-setup.service.
May 13 12:37:01.185182 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 12:37:01.185192 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 12:37:01.185201 systemd[1]: Mounted media.mount - External Media Directory.
May 13 12:37:01.185211 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 12:37:01.185221 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 12:37:01.185230 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 12:37:01.185240 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 12:37:01.185251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:37:01.185263 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 12:37:01.185273 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 12:37:01.185283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:37:01.185316 systemd-journald[1157]: Collecting audit messages is disabled.
May 13 12:37:01.185340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:37:01.185350 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:37:01.185360 systemd-journald[1157]: Journal started
May 13 12:37:01.185381 systemd-journald[1157]: Runtime Journal (/run/log/journal/e2e64570d8ae4f14af56ff4f16ea0556) is 6M, max 48.5M, 42.4M free.
May 13 12:37:00.961856 systemd[1]: Queued start job for default target multi-user.target.
May 13 12:37:00.985823 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 12:37:00.986203 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 12:37:01.186520 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:37:01.189056 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:37:01.189771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:37:01.190089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:37:01.191310 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 12:37:01.191486 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 12:37:01.192631 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:37:01.192802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:37:01.194178 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:37:01.195367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:37:01.196712 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 12:37:01.198288 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 12:37:01.208340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:37:01.215307 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:37:01.217529 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 12:37:01.219374 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 12:37:01.220333 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 12:37:01.220369 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:37:01.222093 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 12:37:01.228760 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 12:37:01.230154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:37:01.231599 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 12:37:01.233740 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 12:37:01.234960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:37:01.237390 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 12:37:01.238454 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:37:01.239528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:37:01.243984 systemd-journald[1157]: Time spent on flushing to /var/log/journal/e2e64570d8ae4f14af56ff4f16ea0556 is 14.886ms for 882 entries.
May 13 12:37:01.243984 systemd-journald[1157]: System Journal (/var/log/journal/e2e64570d8ae4f14af56ff4f16ea0556) is 8M, max 195.6M, 187.6M free.
May 13 12:37:01.273456 systemd-journald[1157]: Received client request to flush runtime journal.
May 13 12:37:01.273492 kernel: loop0: detected capacity change from 0 to 194096
May 13 12:37:01.245421 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 12:37:01.250035 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 12:37:01.252356 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 12:37:01.253499 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 12:37:01.255085 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 12:37:01.258074 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 12:37:01.260538 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 12:37:01.276923 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 12:37:01.279970 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 12:37:01.288288 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:37:01.291956 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 12:37:01.294501 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:37:01.295755 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 12:37:01.303908 kernel: loop1: detected capacity change from 0 to 107312
May 13 12:37:01.317151 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
May 13 12:37:01.317170 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
May 13 12:37:01.321226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:37:01.323908 kernel: loop2: detected capacity change from 0 to 138376
May 13 12:37:01.346916 kernel: loop3: detected capacity change from 0 to 194096
May 13 12:37:01.352908 kernel: loop4: detected capacity change from 0 to 107312
May 13 12:37:01.357904 kernel: loop5: detected capacity change from 0 to 138376
May 13 12:37:01.362541 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 12:37:01.362949 (sd-merge)[1228]: Merged extensions into '/usr'.
May 13 12:37:01.366263 systemd[1]: Reload requested from client PID 1206 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 12:37:01.366277 systemd[1]: Reloading...
May 13 12:37:01.422923 zram_generator::config[1257]: No configuration found.
May 13 12:37:01.498931 ldconfig[1201]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 12:37:01.501457 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:37:01.564571 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 12:37:01.564962 systemd[1]: Reloading finished in 198 ms.
May 13 12:37:01.580902 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 12:37:01.582115 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 12:37:01.594120 systemd[1]: Starting ensure-sysext.service...
May 13 12:37:01.595797 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:37:01.607882 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
May 13 12:37:01.607909 systemd[1]: Reloading...
May 13 12:37:01.616162 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 13 12:37:01.616197 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 13 12:37:01.616422 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 12:37:01.616611 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 12:37:01.617253 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 12:37:01.617460 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
May 13 12:37:01.617508 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
May 13 12:37:01.620221 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:37:01.620234 systemd-tmpfiles[1289]: Skipping /boot
May 13 12:37:01.628789 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:37:01.628808 systemd-tmpfiles[1289]: Skipping /boot
May 13 12:37:01.660923 zram_generator::config[1316]: No configuration found.
May 13 12:37:01.729051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:37:01.793623 systemd[1]: Reloading finished in 185 ms.
May 13 12:37:01.819320 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 12:37:01.835848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:37:01.842695 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:37:01.844805 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 12:37:01.854124 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 12:37:01.856630 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:37:01.858845 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:37:01.861203 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 12:37:01.866783 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:37:01.869177 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:37:01.878141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:37:01.880201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:37:01.881023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:37:01.881141 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:37:01.884026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:37:01.884182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:37:01.886134 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 12:37:01.888441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:37:01.888628 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:37:01.895527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:37:01.895735 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
May 13 12:37:01.897067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:37:01.899347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:37:01.900384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:37:01.900541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:37:01.904169 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 12:37:01.915660 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 12:37:01.918379 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 12:37:01.919652 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:37:01.921736 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:37:01.921899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:37:01.923274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:37:01.923577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:37:01.925624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:37:01.925789 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:37:01.929302 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 12:37:01.934154 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 12:37:01.948881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:37:01.949857 augenrules[1416]: No rules
May 13 12:37:01.951131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:37:01.953738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:37:01.957249 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:37:01.962974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:37:01.963790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:37:01.963908 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:37:01.965877 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:37:01.967404 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 12:37:01.969317 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:37:01.971423 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:37:01.972820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:37:01.972992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:37:01.974390 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:37:01.974577 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:37:01.976241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:37:01.976391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:37:01.977685 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:37:01.977819 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:37:01.981320 systemd[1]: Finished ensure-sysext.service.
May 13 12:37:01.992793 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 12:37:01.993019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:37:01.993068 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:37:01.994880 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 12:37:02.034849 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:37:02.037416 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 12:37:02.048993 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 12:37:02.080929 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 12:37:02.118307 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 12:37:02.119590 systemd[1]: Reached target time-set.target - System Time Set.
May 13 12:37:02.133748 systemd-networkd[1429]: lo: Link UP
May 13 12:37:02.133755 systemd-networkd[1429]: lo: Gained carrier
May 13 12:37:02.135580 systemd-resolved[1356]: Positive Trust Anchors:
May 13 12:37:02.135598 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:37:02.135630 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:37:02.137149 systemd-networkd[1429]: Enumeration completed
May 13 12:37:02.137291 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:37:02.137712 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:37:02.137722 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:37:02.138210 systemd-networkd[1429]: eth0: Link UP
May 13 12:37:02.138385 systemd-networkd[1429]: eth0: Gained carrier
May 13 12:37:02.138406 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:37:02.141511 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 12:37:02.143068 systemd-resolved[1356]: Defaulting to hostname 'linux'.
May 13 12:37:02.145508 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 12:37:02.146721 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:37:02.148028 systemd[1]: Reached target network.target - Network.
May 13 12:37:02.148672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:37:02.150017 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:37:02.150863 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 12:37:02.152976 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 12:37:02.154014 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 12:37:02.155092 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 12:37:02.155986 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 12:37:02.156848 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 12:37:02.156877 systemd[1]: Reached target paths.target - Path Units.
May 13 12:37:02.157526 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:37:02.159094 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 12:37:02.161051 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 12:37:02.163817 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 12:37:02.165585 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 12:37:02.165949 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 12:37:02.168020 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 12:37:02.169267 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection.
May 13 12:37:02.170481 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 12:37:02.170960 systemd-timesyncd[1437]: Initial clock synchronization to Tue 2025-05-13 12:37:02.250269 UTC.
May 13 12:37:02.171078 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 12:37:02.172205 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 12:37:02.173728 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 12:37:02.181448 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:37:02.182199 systemd[1]: Reached target basic.target - Basic System.
May 13 12:37:02.182937 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 12:37:02.182968 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 12:37:02.183862 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 12:37:02.187041 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 12:37:02.188632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 12:37:02.190372 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 12:37:02.193116 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 12:37:02.193849 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 12:37:02.194750 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 12:37:02.198476 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 12:37:02.201065 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 12:37:02.203101 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 12:37:02.205915 jq[1477]: false
May 13 12:37:02.205722 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 12:37:02.207456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:37:02.209289 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 12:37:02.209693 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 12:37:02.211106 systemd[1]: Starting update-engine.service - Update Engine...
May 13 12:37:02.214075 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 12:37:02.215709 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 12:37:02.216360 extend-filesystems[1478]: Found loop3
May 13 12:37:02.217345 extend-filesystems[1478]: Found loop4
May 13 12:37:02.217345 extend-filesystems[1478]: Found loop5
May 13 12:37:02.217345 extend-filesystems[1478]: Found vda
May 13 12:37:02.217345 extend-filesystems[1478]: Found vda1
May 13 12:37:02.217345 extend-filesystems[1478]: Found vda2
May 13 12:37:02.217345 extend-filesystems[1478]: Found vda3
May 13 12:37:02.217345 extend-filesystems[1478]: Found usr
May 13 12:37:02.217345 extend-filesystems[1478]: Found vda4
May 13 12:37:02.217345 extend-filesystems[1478]: Found vda6
May 13 12:37:02.217345 extend-filesystems[1478]: Found vda7
May 13 12:37:02.217345 extend-filesystems[1478]: Found vda9
May 13 12:37:02.217345 extend-filesystems[1478]: Checking size of /dev/vda9
May 13 12:37:02.238540 extend-filesystems[1478]: Resized partition /dev/vda9
May 13 12:37:02.219347 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 12:37:02.239661 extend-filesystems[1509]: resize2fs 1.47.2 (1-Jan-2025)
May 13 12:37:02.220495 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 12:37:02.241140 jq[1489]: true
May 13 12:37:02.220648 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 12:37:02.222200 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 12:37:02.222366 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 12:37:02.232812 systemd[1]: motdgen.service: Deactivated successfully.
May 13 12:37:02.241163 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 12:37:02.243979 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 12:37:02.257770 jq[1499]: true
May 13 12:37:02.268169 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 12:37:02.281611 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 12:37:02.281611 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 12:37:02.281611 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 12:37:02.292957 extend-filesystems[1478]: Resized filesystem in /dev/vda9
May 13 12:37:02.281659 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 12:37:02.283359 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 12:37:02.283541 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 12:37:02.301609 update_engine[1488]: I20250513 12:37:02.301398  1488 main.cc:92] Flatcar Update Engine starting
May 13 12:37:02.306282 dbus-daemon[1475]: [system] SELinux support is enabled
May 13 12:37:02.309905 update_engine[1488]: I20250513 12:37:02.309854  1488 update_check_scheduler.cc:74] Next update check in 2m27s
May 13 12:37:02.323480 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (Power Button)
May 13 12:37:02.323748 systemd-logind[1485]: New seat seat0.
May 13 12:37:02.332842 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 12:37:02.335880 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 12:37:02.337205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:37:02.340519 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
May 13 12:37:02.342444 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 12:37:02.349014 tar[1496]: linux-arm64/helm
May 13 12:37:02.350467 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 13 12:37:02.351077 systemd[1]: Started update-engine.service - Update Engine.
May 13 12:37:02.353227 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 12:37:02.353613 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 12:37:02.353903 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 12:37:02.355538 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 12:37:02.355723 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 12:37:02.360296 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 12:37:02.412036 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 12:37:02.493403 containerd[1515]: time="2025-05-13T12:37:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 12:37:02.494446 containerd[1515]: time="2025-05-13T12:37:02.494407200Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 13 12:37:02.505423 containerd[1515]: time="2025-05-13T12:37:02.505372280Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.24µs"
May 13 12:37:02.505423 containerd[1515]: time="2025-05-13T12:37:02.505416840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 12:37:02.505523 containerd[1515]: time="2025-05-13T12:37:02.505435720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 12:37:02.505621 containerd[1515]: time="2025-05-13T12:37:02.505600520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 12:37:02.505646 containerd[1515]: time="2025-05-13T12:37:02.505624480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 12:37:02.505675 containerd[1515]: time="2025-05-13T12:37:02.505649520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 12:37:02.505732 containerd[1515]: time="2025-05-13T12:37:02.505711280Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 12:37:02.505732 containerd[1515]: time="2025-05-13T12:37:02.505729520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 12:37:02.506028 containerd[1515]: time="2025-05-13T12:37:02.506001240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 12:37:02.506028 containerd[1515]: time="2025-05-13T12:37:02.506026200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 12:37:02.506069 containerd[1515]: time="2025-05-13T12:37:02.506039360Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 12:37:02.506069 containerd[1515]: time="2025-05-13T12:37:02.506047760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 12:37:02.506149 containerd[1515]: time="2025-05-13T12:37:02.506129320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 12:37:02.506383 containerd[1515]: time="2025-05-13T12:37:02.506360360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 12:37:02.506411 containerd[1515]: time="2025-05-13T12:37:02.506401040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 12:37:02.506429 containerd[1515]: time="2025-05-13T12:37:02.506413080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 12:37:02.506474 containerd[1515]: time="2025-05-13T12:37:02.506459320Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 12:37:02.506800 containerd[1515]: time="2025-05-13T12:37:02.506777920Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 12:37:02.506878 containerd[1515]: time="2025-05-13T12:37:02.506859120Z" level=info msg="metadata content store policy set" policy=shared
May 13 12:37:02.519044 containerd[1515]: time="2025-05-13T12:37:02.519009040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 12:37:02.519089 containerd[1515]: time="2025-05-13T12:37:02.519073440Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 12:37:02.519113 containerd[1515]: time="2025-05-13T12:37:02.519092520Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 12:37:02.519113 containerd[1515]: time="2025-05-13T12:37:02.519105080Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 12:37:02.519183 containerd[1515]: time="2025-05-13T12:37:02.519167200Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 12:37:02.519225 containerd[1515]: time="2025-05-13T12:37:02.519187480Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 12:37:02.519225 containerd[1515]: time="2025-05-13T12:37:02.519202240Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 12:37:02.519225 containerd[1515]: time="2025-05-13T12:37:02.519214840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 12:37:02.519273 containerd[1515]: time="2025-05-13T12:37:02.519227560Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 12:37:02.519273 containerd[1515]: time="2025-05-13T12:37:02.519248040Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 12:37:02.519273 containerd[1515]: time="2025-05-13T12:37:02.519261120Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 12:37:02.519322 containerd[1515]: time="2025-05-13T12:37:02.519279200Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519430800Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519459920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519491080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519504640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519516160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519527040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519538000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519550000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519571320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519583320Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 12:37:02.519924 containerd[1515]: time="2025-05-13T12:37:02.519597320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 12:37:02.521168 containerd[1515]: time="2025-05-13T12:37:02.519800200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 12:37:02.521230 containerd[1515]: time="2025-05-13T12:37:02.521175480Z" level=info msg="Start snapshots syncer"
May 13 12:37:02.521230 containerd[1515]: time="2025-05-13T12:37:02.521219480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 12:37:02.521568 containerd[1515]: time="2025-05-13T12:37:02.521523160Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 12:37:02.521690 containerd[1515]: time="2025-05-13T12:37:02.521582240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 12:37:02.521715 containerd[1515]: time="2025-05-13T12:37:02.521686760Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 12:37:02.521843 containerd[1515]: time="2025-05-13T12:37:02.521817400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 12:37:02.521875 containerd[1515]: time="2025-05-13T12:37:02.521859040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 12:37:02.521908 containerd[1515]: time="2025-05-13T12:37:02.521877640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 12:37:02.521928 containerd[1515]: time="2025-05-13T12:37:02.521908080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 12:37:02.521946 containerd[1515]: time="2025-05-13T12:37:02.521927400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 12:37:02.521980 containerd[1515]: time="2025-05-13T12:37:02.521943600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 12:37:02.521980 containerd[1515]: time="2025-05-13T12:37:02.521959120Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 12:37:02.522013 containerd[1515]: time="2025-05-13T12:37:02.522002600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 12:37:02.522057 containerd[1515]: time="2025-05-13T12:37:02.522019960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 12:37:02.522057 containerd[1515]: time="2025-05-13T12:37:02.522036040Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 12:37:02.522109 containerd[1515]: time="2025-05-13T12:37:02.522089480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 12:37:02.522141 containerd[1515]: time="2025-05-13T12:37:02.522114720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 12:37:02.522141 containerd[1515]: time="2025-05-13T12:37:02.522128160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 12:37:02.522175 containerd[1515]: time="2025-05-13T12:37:02.522142440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 12:37:02.522175 containerd[1515]: time="2025-05-13T12:37:02.522152440Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 12:37:02.522175 containerd[1515]: time="2025-05-13T12:37:02.522167160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 12:37:02.522252 containerd[1515]: time="2025-05-13T12:37:02.522182320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 12:37:02.522324 containerd[1515]: time="2025-05-13T12:37:02.522306520Z" level=info msg="runtime interface created"
May 13 12:37:02.522324 containerd[1515]: time="2025-05-13T12:37:02.522321760Z" level=info msg="created NRI interface"
May 13 12:37:02.522378 containerd[1515]: time="2025-05-13T12:37:02.522332680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 12:37:02.522378 containerd[1515]: time="2025-05-13T12:37:02.522349040Z" level=info msg="Connect containerd service"
May 13 12:37:02.522415 containerd[1515]: time="2025-05-13T12:37:02.522388520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 12:37:02.523439 containerd[1515]: time="2025-05-13T12:37:02.523396080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627424720Z" level=info msg="Start subscribing containerd event"
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627561080Z" level=info msg="Start recovering state"
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627648400Z" level=info msg="Start event monitor"
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627661040Z" level=info msg="Start cni network conf syncer for default"
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627686720Z" level=info msg="Start streaming server"
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627694800Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627735000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627780240Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627789920Z" level=info msg="runtime interface starting up..."
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627804080Z" level=info msg="starting plugins..."
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.627822360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 12:37:02.628958 containerd[1515]: time="2025-05-13T12:37:02.628194280Z" level=info msg="containerd successfully booted in 0.135134s"
May 13 12:37:02.628231 systemd[1]: Started containerd.service - containerd container runtime.
May 13 12:37:02.672107 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 12:37:02.691926 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 12:37:02.694301 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 12:37:02.698573 tar[1496]: linux-arm64/LICENSE
May 13 12:37:02.698626 tar[1496]: linux-arm64/README.md
May 13 12:37:02.708859 systemd[1]: issuegen.service: Deactivated successfully.
May 13 12:37:02.709072 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 12:37:02.710222 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 12:37:02.713443 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 12:37:02.723328 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 12:37:02.725729 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 12:37:02.727584 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 13 12:37:02.728622 systemd[1]: Reached target getty.target - Login Prompts.
May 13 12:37:03.277349 systemd-networkd[1429]: eth0: Gained IPv6LL
May 13 12:37:03.280169 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 12:37:03.281997 systemd[1]: Reached target network-online.target - Network is Online.
May 13 12:37:03.284992 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 12:37:03.287244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:37:03.289146 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 12:37:03.317235 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 12:37:03.317445 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 12:37:03.318740 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 12:37:03.321322 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 12:37:03.787914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:37:03.789145 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 12:37:03.790034 systemd[1]: Startup finished in 2.060s (kernel) + 4.963s (initrd) + 3.247s (userspace) = 10.272s.
May 13 12:37:03.792667 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:37:04.291530 kubelet[1610]: E0513 12:37:04.291479    1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:37:04.294076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:37:04.294213 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:37:04.294666 systemd[1]: kubelet.service: Consumed 812ms CPU time, 238.5M memory peak.
May 13 12:37:08.129227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 12:37:08.130325 systemd[1]: Started sshd@0-10.0.0.46:22-10.0.0.1:52484.service - OpenSSH per-connection server daemon (10.0.0.1:52484).
May 13 12:37:08.199885 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 52484 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:37:08.201470 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:37:08.209329 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 12:37:08.210241 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 12:37:08.215804 systemd-logind[1485]: New session 1 of user core.
May 13 12:37:08.237936 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 12:37:08.240390 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 12:37:08.257856 (systemd)[1629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 12:37:08.260108 systemd-logind[1485]: New session c1 of user core.
May 13 12:37:08.379076 systemd[1629]: Queued start job for default target default.target.
May 13 12:37:08.400801 systemd[1629]: Created slice app.slice - User Application Slice.
May 13 12:37:08.400839 systemd[1629]: Reached target paths.target - Paths.
May 13 12:37:08.400880 systemd[1629]: Reached target timers.target - Timers.
May 13 12:37:08.402095 systemd[1629]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 12:37:08.411157 systemd[1629]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 12:37:08.411218 systemd[1629]: Reached target sockets.target - Sockets.
May 13 12:37:08.411255 systemd[1629]: Reached target basic.target - Basic System.
May 13 12:37:08.411282 systemd[1629]: Reached target default.target - Main User Target.
May 13 12:37:08.411309 systemd[1629]: Startup finished in 145ms.
May 13 12:37:08.411473 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 12:37:08.412793 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 12:37:08.476196 systemd[1]: Started sshd@1-10.0.0.46:22-10.0.0.1:52500.service - OpenSSH per-connection server daemon (10.0.0.1:52500).
May 13 12:37:08.535971 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 52500 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:37:08.537276 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:37:08.542019 systemd-logind[1485]: New session 2 of user core.
May 13 12:37:08.555065 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 12:37:08.606694 sshd[1642]: Connection closed by 10.0.0.1 port 52500
May 13 12:37:08.607827 sshd-session[1640]: pam_unix(sshd:session): session closed for user core
May 13 12:37:08.618917 systemd[1]: sshd@1-10.0.0.46:22-10.0.0.1:52500.service: Deactivated successfully.
May 13 12:37:08.621717 systemd[1]: session-2.scope: Deactivated successfully.
May 13 12:37:08.622435 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit.
May 13 12:37:08.624815 systemd[1]: Started sshd@2-10.0.0.46:22-10.0.0.1:52516.service - OpenSSH per-connection server daemon (10.0.0.1:52516).
May 13 12:37:08.625414 systemd-logind[1485]: Removed session 2.
May 13 12:37:08.681467 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 52516 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:37:08.682652 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:37:08.687251 systemd-logind[1485]: New session 3 of user core.
May 13 12:37:08.702111 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 12:37:08.749674 sshd[1650]: Connection closed by 10.0.0.1 port 52516
May 13 12:37:08.749955 sshd-session[1648]: pam_unix(sshd:session): session closed for user core
May 13 12:37:08.762938 systemd[1]: sshd@2-10.0.0.46:22-10.0.0.1:52516.service: Deactivated successfully.
May 13 12:37:08.764650 systemd[1]: session-3.scope: Deactivated successfully.
May 13 12:37:08.765497 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit.
May 13 12:37:08.767824 systemd[1]: Started sshd@3-10.0.0.46:22-10.0.0.1:52532.service - OpenSSH per-connection server daemon (10.0.0.1:52532).
May 13 12:37:08.768466 systemd-logind[1485]: Removed session 3.
May 13 12:37:08.821338 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 52532 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:37:08.822484 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:37:08.826790 systemd-logind[1485]: New session 4 of user core.
May 13 12:37:08.837030 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 12:37:08.888317 sshd[1658]: Connection closed by 10.0.0.1 port 52532
May 13 12:37:08.888661 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
May 13 12:37:08.901728 systemd[1]: sshd@3-10.0.0.46:22-10.0.0.1:52532.service: Deactivated successfully.
May 13 12:37:08.904079 systemd[1]: session-4.scope: Deactivated successfully.
May 13 12:37:08.904782 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit.
May 13 12:37:08.906760 systemd[1]: Started sshd@4-10.0.0.46:22-10.0.0.1:52546.service - OpenSSH per-connection server daemon (10.0.0.1:52546).
May 13 12:37:08.907566 systemd-logind[1485]: Removed session 4.
May 13 12:37:08.953039 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 52546 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:37:08.954323 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:37:08.958467 systemd-logind[1485]: New session 5 of user core.
May 13 12:37:08.968028 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 12:37:09.031678 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 12:37:09.033795 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:37:09.050536 sudo[1667]: pam_unix(sudo:session): session closed for user root
May 13 12:37:09.051893 sshd[1666]: Connection closed by 10.0.0.1 port 52546
May 13 12:37:09.052391 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
May 13 12:37:09.062771 systemd[1]: sshd@4-10.0.0.46:22-10.0.0.1:52546.service: Deactivated successfully.
May 13 12:37:09.064295 systemd[1]: session-5.scope: Deactivated successfully.
May 13 12:37:09.065049 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit.
May 13 12:37:09.068232 systemd[1]: Started sshd@5-10.0.0.46:22-10.0.0.1:52548.service - OpenSSH per-connection server daemon (10.0.0.1:52548).
May 13 12:37:09.068708 systemd-logind[1485]: Removed session 5.
May 13 12:37:09.115404 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 52548 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:37:09.116558 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:37:09.120083 systemd-logind[1485]: New session 6 of user core.
May 13 12:37:09.131107 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 12:37:09.180533 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 12:37:09.180803 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:37:09.245441 sudo[1677]: pam_unix(sudo:session): session closed for user root
May 13 12:37:09.250478 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 12:37:09.250731 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:37:09.259312 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:37:09.303715 augenrules[1699]: No rules
May 13 12:37:09.304966 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:37:09.305205 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:37:09.306416 sudo[1676]: pam_unix(sudo:session): session closed for user root
May 13 12:37:09.307932 sshd[1675]: Connection closed by 10.0.0.1 port 52548
May 13 12:37:09.307921 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
May 13 12:37:09.319634 systemd[1]: sshd@5-10.0.0.46:22-10.0.0.1:52548.service: Deactivated successfully.
May 13 12:37:09.321945 systemd[1]: session-6.scope: Deactivated successfully.
May 13 12:37:09.322548 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit.
May 13 12:37:09.324782 systemd[1]: Started sshd@6-10.0.0.46:22-10.0.0.1:52550.service - OpenSSH per-connection server daemon (10.0.0.1:52550).
May 13 12:37:09.325267 systemd-logind[1485]: Removed session 6.
May 13 12:37:09.378222 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 52550 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:37:09.379337 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:37:09.382988 systemd-logind[1485]: New session 7 of user core.
May 13 12:37:09.394034 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 12:37:09.443432 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 12:37:09.443711 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:37:09.802371 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 12:37:09.813231 (dockerd)[1731]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 12:37:10.091059 dockerd[1731]: time="2025-05-13T12:37:10.090931972Z" level=info msg="Starting up"
May 13 12:37:10.092947 dockerd[1731]: time="2025-05-13T12:37:10.092211223Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 12:37:10.124928 systemd[1]: var-lib-docker-metacopy\x2dcheck4034662604-merged.mount: Deactivated successfully.
May 13 12:37:10.134423 dockerd[1731]: time="2025-05-13T12:37:10.134380516Z" level=info msg="Loading containers: start."
May 13 12:37:10.148699 kernel: Initializing XFRM netlink socket
May 13 12:37:10.340835 systemd-networkd[1429]: docker0: Link UP
May 13 12:37:10.345408 dockerd[1731]: time="2025-05-13T12:37:10.345152386Z" level=info msg="Loading containers: done."
May 13 12:37:10.359041 dockerd[1731]: time="2025-05-13T12:37:10.358997227Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 12:37:10.359158 dockerd[1731]: time="2025-05-13T12:37:10.359072506Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 13 12:37:10.359189 dockerd[1731]: time="2025-05-13T12:37:10.359164036Z" level=info msg="Initializing buildkit"
May 13 12:37:10.380375 dockerd[1731]: time="2025-05-13T12:37:10.380340493Z" level=info msg="Completed buildkit initialization"
May 13 12:37:10.386640 dockerd[1731]: time="2025-05-13T12:37:10.386557781Z" level=info msg="Daemon has completed initialization"
May 13 12:37:10.387007 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 12:37:10.387298 dockerd[1731]: time="2025-05-13T12:37:10.387119961Z" level=info msg="API listen on /run/docker.sock"
May 13 12:37:11.114956 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3878477838-merged.mount: Deactivated successfully.
May 13 12:37:11.307984 containerd[1515]: time="2025-05-13T12:37:11.307948556Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 13 12:37:11.953778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074217106.mount: Deactivated successfully.
May 13 12:37:13.060723 containerd[1515]: time="2025-05-13T12:37:13.060613778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:13.061510 containerd[1515]: time="2025-05-13T12:37:13.061482406Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 13 12:37:13.062163 containerd[1515]: time="2025-05-13T12:37:13.062107999Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:13.064475 containerd[1515]: time="2025-05-13T12:37:13.064417027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:13.065827 containerd[1515]: time="2025-05-13T12:37:13.065776296Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.757790512s"
May 13 12:37:13.065827 containerd[1515]: time="2025-05-13T12:37:13.065818847Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 13 12:37:13.081833 containerd[1515]: time="2025-05-13T12:37:13.081768165Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 12:37:14.392620 containerd[1515]: time="2025-05-13T12:37:14.392564527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:14.393702 containerd[1515]: time="2025-05-13T12:37:14.393666384Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 13 12:37:14.394738 containerd[1515]: time="2025-05-13T12:37:14.394678301Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:14.397406 containerd[1515]: time="2025-05-13T12:37:14.397375063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:14.398358 containerd[1515]: time="2025-05-13T12:37:14.398332647Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.316530996s"
May 13 12:37:14.398403 containerd[1515]: time="2025-05-13T12:37:14.398362840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 13 12:37:14.413348 containerd[1515]: time="2025-05-13T12:37:14.413284126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 12:37:14.544546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 12:37:14.545913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:37:14.678109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:37:14.681285 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:37:14.721732 kubelet[2033]: E0513 12:37:14.721663 2033 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:37:14.724791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:37:14.724939 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:37:14.726027 systemd[1]: kubelet.service: Consumed 137ms CPU time, 94.5M memory peak.
May 13 12:37:15.414413 containerd[1515]: time="2025-05-13T12:37:15.414205224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:15.415006 containerd[1515]: time="2025-05-13T12:37:15.414970701Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 13 12:37:15.415965 containerd[1515]: time="2025-05-13T12:37:15.415931987Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:15.418781 containerd[1515]: time="2025-05-13T12:37:15.418743680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:15.419933 containerd[1515]: time="2025-05-13T12:37:15.419770237Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.006313409s"
May 13 12:37:15.419933 containerd[1515]: time="2025-05-13T12:37:15.419813055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 13 12:37:15.436566 containerd[1515]: time="2025-05-13T12:37:15.436447315Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 12:37:16.385030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494395629.mount: Deactivated successfully.
May 13 12:37:16.675334 containerd[1515]: time="2025-05-13T12:37:16.675219252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:16.675818 containerd[1515]: time="2025-05-13T12:37:16.675771159Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 13 12:37:16.676486 containerd[1515]: time="2025-05-13T12:37:16.676448417Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:16.678305 containerd[1515]: time="2025-05-13T12:37:16.678272863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:16.678802 containerd[1515]: time="2025-05-13T12:37:16.678765483Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.242206835s"
May 13 12:37:16.678834 containerd[1515]: time="2025-05-13T12:37:16.678800359Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 13 12:37:16.693867 containerd[1515]: time="2025-05-13T12:37:16.693835957Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 12:37:17.204363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776883194.mount: Deactivated successfully.
May 13 12:37:17.762120 containerd[1515]: time="2025-05-13T12:37:17.762070356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:17.763077 containerd[1515]: time="2025-05-13T12:37:17.763048209Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 13 12:37:17.763801 containerd[1515]: time="2025-05-13T12:37:17.763751869Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:17.766717 containerd[1515]: time="2025-05-13T12:37:17.766676089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:17.768046 containerd[1515]: time="2025-05-13T12:37:17.767687370Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.073819386s"
May 13 12:37:17.768088 containerd[1515]: time="2025-05-13T12:37:17.768050783Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 12:37:17.783280 containerd[1515]: time="2025-05-13T12:37:17.783238470Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 12:37:18.181567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792579445.mount: Deactivated successfully.
May 13 12:37:18.184933 containerd[1515]: time="2025-05-13T12:37:18.184265000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:18.185578 containerd[1515]: time="2025-05-13T12:37:18.185553518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 13 12:37:18.186469 containerd[1515]: time="2025-05-13T12:37:18.186447168Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:18.188623 containerd[1515]: time="2025-05-13T12:37:18.188580404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:18.189425 containerd[1515]: time="2025-05-13T12:37:18.189396028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 406.120243ms"
May 13 12:37:18.189547 containerd[1515]: time="2025-05-13T12:37:18.189520102Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 13 12:37:18.204280 containerd[1515]: time="2025-05-13T12:37:18.204248128Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 12:37:18.662082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827744156.mount: Deactivated successfully.
May 13 12:37:20.268414 containerd[1515]: time="2025-05-13T12:37:20.268346963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:20.269987 containerd[1515]: time="2025-05-13T12:37:20.269951992Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 13 12:37:20.270846 containerd[1515]: time="2025-05-13T12:37:20.270785859Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:20.273241 containerd[1515]: time="2025-05-13T12:37:20.273172949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:20.274356 containerd[1515]: time="2025-05-13T12:37:20.274274622Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.06999127s"
May 13 12:37:20.274356 containerd[1515]: time="2025-05-13T12:37:20.274305273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 13 12:37:24.851316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 12:37:24.853087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:37:25.028209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:37:25.030848 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:37:25.073948 kubelet[2285]: E0513 12:37:25.073886 2285 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:37:25.076230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:37:25.076332 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:37:25.076747 systemd[1]: kubelet.service: Consumed 127ms CPU time, 94.6M memory peak.
May 13 12:37:26.336442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:37:26.336741 systemd[1]: kubelet.service: Consumed 127ms CPU time, 94.6M memory peak.
May 13 12:37:26.338773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:37:26.352572 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-7.scope)...
May 13 12:37:26.352590 systemd[1]: Reloading...
May 13 12:37:26.423993 zram_generator::config[2344]: No configuration found.
May 13 12:37:26.523584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:37:26.607585 systemd[1]: Reloading finished in 254 ms.
May 13 12:37:26.646196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:37:26.649877 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:37:26.663453 systemd[1]: kubelet.service: Deactivated successfully.
May 13 12:37:26.664920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:37:26.664956 systemd[1]: kubelet.service: Consumed 79ms CPU time, 82.5M memory peak.
May 13 12:37:26.666292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:37:26.766228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:37:26.770486 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 12:37:26.807997 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 12:37:26.808239 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 12:37:26.808290 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 12:37:26.809146 kubelet[2391]: I0513 12:37:26.809108 2391 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 12:37:27.307610 kubelet[2391]: I0513 12:37:27.307584 2391 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 12:37:27.307717 kubelet[2391]: I0513 12:37:27.307708 2391 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 12:37:27.307958 kubelet[2391]: I0513 12:37:27.307943 2391 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 12:37:27.344767 kubelet[2391]: I0513 12:37:27.344733 2391 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 12:37:27.344964 kubelet[2391]: E0513 12:37:27.344944 2391 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.46:6443: connect: connection refused
May 13 12:37:27.355028 kubelet[2391]: I0513 12:37:27.354999 2391 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 12:37:27.356121 kubelet[2391]: I0513 12:37:27.356072 2391 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 12:37:27.356279 kubelet[2391]: I0513 12:37:27.356118 2391 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 12:37:27.356414 kubelet[2391]: I0513 12:37:27.356338 2391 topology_manager.go:138] "Creating topology manager with none policy"
May 13 12:37:27.356414 kubelet[2391]: I0513 12:37:27.356346 2391 container_manager_linux.go:301] "Creating device plugin manager"
May 13 12:37:27.356609 kubelet[2391]: I0513 12:37:27.356593 2391 state_mem.go:36] "Initialized new in-memory state store"
May 13 12:37:27.357657 kubelet[2391]: I0513 12:37:27.357634 2391 kubelet.go:400] "Attempting to sync node with API server"
May 13 12:37:27.357657 kubelet[2391]: I0513 12:37:27.357656 2391 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 12:37:27.358012 kubelet[2391]: I0513 12:37:27.357991 2391 kubelet.go:312] "Adding apiserver pod source"
May 13 12:37:27.358260 kubelet[2391]: I0513 12:37:27.358242 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 12:37:27.361691 kubelet[2391]: W0513 12:37:27.361640 2391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
May 13 12:37:27.361736 kubelet[2391]: E0513 12:37:27.361695 2391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
May 13 12:37:27.362006 kubelet[2391]: W0513 12:37:27.361946 2391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
May 13 12:37:27.362006 kubelet[2391]: E0513 12:37:27.362001 2391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
May 13 12:37:27.362586 kubelet[2391]: I0513 12:37:27.362556 2391 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 13 12:37:27.362935 kubelet[2391]: I0513 12:37:27.362919 2391 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 12:37:27.363031 kubelet[2391]: W0513 12:37:27.363020 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 12:37:27.365617 kubelet[2391]: I0513 12:37:27.363858 2391 server.go:1264] "Started kubelet"
May 13 12:37:27.368919 kubelet[2391]: I0513 12:37:27.368763 2391 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 12:37:27.369006 kubelet[2391]: I0513 12:37:27.368754 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 12:37:27.369265 kubelet[2391]: I0513 12:37:27.369252 2391 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 12:37:27.369429 kubelet[2391]: I0513 12:37:27.369412 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 12:37:27.370251 kubelet[2391]: I0513 12:37:27.370190 2391 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 12:37:27.370703 kubelet[2391]: I0513 12:37:27.370651 2391 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 12:37:27.370838 kubelet[2391]: I0513 12:37:27.370826 2391 reconciler.go:26] "Reconciler: start to sync state"
May 13 12:37:27.371865 kubelet[2391]: W0513 12:37:27.371739 2391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused
May 13 12:37:27.371865 kubelet[2391]: E0513 12:37:27.371785 2391
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:27.371865 kubelet[2391]: E0513 12:37:27.371841 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="200ms" May 13 12:37:27.374211 kubelet[2391]: E0513 12:37:27.371503 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f166f370d37b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:37:27.363835831 +0000 UTC m=+0.589840508,LastTimestamp:2025-05-13 12:37:27.363835831 +0000 UTC m=+0.589840508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:37:27.374211 kubelet[2391]: I0513 12:37:27.373750 2391 server.go:455] "Adding debug handlers to kubelet server" May 13 12:37:27.374331 kubelet[2391]: I0513 12:37:27.374244 2391 factory.go:221] Registration of the systemd container factory successfully May 13 12:37:27.374489 kubelet[2391]: I0513 12:37:27.374442 2391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 
12:37:27.375862 kubelet[2391]: E0513 12:37:27.375845 2391 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:37:27.376150 kubelet[2391]: I0513 12:37:27.376137 2391 factory.go:221] Registration of the containerd container factory successfully May 13 12:37:27.385403 kubelet[2391]: I0513 12:37:27.385376 2391 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 12:37:27.385403 kubelet[2391]: I0513 12:37:27.385407 2391 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 12:37:27.385507 kubelet[2391]: I0513 12:37:27.385427 2391 state_mem.go:36] "Initialized new in-memory state store" May 13 12:37:27.389278 kubelet[2391]: I0513 12:37:27.389128 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:37:27.390077 kubelet[2391]: I0513 12:37:27.390059 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 12:37:27.390180 kubelet[2391]: I0513 12:37:27.390171 2391 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 12:37:27.390381 kubelet[2391]: I0513 12:37:27.390366 2391 kubelet.go:2337] "Starting kubelet main sync loop" May 13 12:37:27.390519 kubelet[2391]: E0513 12:37:27.390500 2391 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:37:27.391297 kubelet[2391]: W0513 12:37:27.391246 2391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:27.391543 kubelet[2391]: E0513 12:37:27.391416 2391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:27.449539 kubelet[2391]: I0513 12:37:27.449494 2391 policy_none.go:49] "None policy: Start" May 13 12:37:27.450344 kubelet[2391]: I0513 12:37:27.450327 2391 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 12:37:27.450521 kubelet[2391]: I0513 12:37:27.450452 2391 state_mem.go:35] "Initializing new in-memory state store" May 13 12:37:27.457237 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 12:37:27.470902 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 12:37:27.472011 kubelet[2391]: I0513 12:37:27.471912 2391 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 12:37:27.474175 kubelet[2391]: E0513 12:37:27.474145 2391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" May 13 12:37:27.474586 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 12:37:27.481816 kubelet[2391]: I0513 12:37:27.481670 2391 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:37:27.481870 kubelet[2391]: I0513 12:37:27.481847 2391 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:37:27.482014 kubelet[2391]: I0513 12:37:27.481989 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:37:27.483259 kubelet[2391]: E0513 12:37:27.483239 2391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 12:37:27.491786 kubelet[2391]: I0513 12:37:27.491716 2391 topology_manager.go:215] "Topology Admit Handler" podUID="c32685094354db7a8efdc99a6bb16702" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 12:37:27.492725 kubelet[2391]: I0513 12:37:27.492703 2391 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 12:37:27.493720 kubelet[2391]: I0513 12:37:27.493695 2391 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 12:37:27.499222 systemd[1]: Created slice kubepods-burstable-podc32685094354db7a8efdc99a6bb16702.slice - libcontainer container kubepods-burstable-podc32685094354db7a8efdc99a6bb16702.slice. May 13 12:37:27.511829 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 13 12:37:27.515366 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. 
May 13 12:37:27.571335 kubelet[2391]: I0513 12:37:27.571258 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:37:27.571335 kubelet[2391]: I0513 12:37:27.571288 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:37:27.571507 kubelet[2391]: I0513 12:37:27.571484 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:37:27.571573 kubelet[2391]: I0513 12:37:27.571512 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 12:37:27.571573 kubelet[2391]: I0513 12:37:27.571532 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c32685094354db7a8efdc99a6bb16702-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c32685094354db7a8efdc99a6bb16702\") " pod="kube-system/kube-apiserver-localhost" May 13 
12:37:27.571573 kubelet[2391]: I0513 12:37:27.571547 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c32685094354db7a8efdc99a6bb16702-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c32685094354db7a8efdc99a6bb16702\") " pod="kube-system/kube-apiserver-localhost" May 13 12:37:27.571573 kubelet[2391]: I0513 12:37:27.571562 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c32685094354db7a8efdc99a6bb16702-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c32685094354db7a8efdc99a6bb16702\") " pod="kube-system/kube-apiserver-localhost" May 13 12:37:27.571739 kubelet[2391]: I0513 12:37:27.571591 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:37:27.571739 kubelet[2391]: I0513 12:37:27.571605 2391 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:37:27.572229 kubelet[2391]: E0513 12:37:27.572177 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="400ms" May 13 12:37:27.675599 kubelet[2391]: I0513 12:37:27.675570 2391 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 12:37:27.675915 kubelet[2391]: E0513 12:37:27.675875 2391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" May 13 12:37:27.809534 containerd[1515]: time="2025-05-13T12:37:27.809444027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c32685094354db7a8efdc99a6bb16702,Namespace:kube-system,Attempt:0,}" May 13 12:37:27.814572 containerd[1515]: time="2025-05-13T12:37:27.814529094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 12:37:27.818431 containerd[1515]: time="2025-05-13T12:37:27.818391145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 12:37:27.973050 kubelet[2391]: E0513 12:37:27.972968 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="800ms" May 13 12:37:28.078392 kubelet[2391]: I0513 12:37:28.078347 2391 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 12:37:28.078716 kubelet[2391]: E0513 12:37:28.078679 2391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" May 13 12:37:28.247665 kubelet[2391]: W0513 12:37:28.247553 2391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:28.247665 kubelet[2391]: E0513 12:37:28.247613 2391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:28.355051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071209268.mount: Deactivated successfully. May 13 12:37:28.358847 containerd[1515]: time="2025-05-13T12:37:28.358799532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:37:28.359526 containerd[1515]: time="2025-05-13T12:37:28.359492540Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 12:37:28.361758 containerd[1515]: time="2025-05-13T12:37:28.361706098Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:37:28.362617 containerd[1515]: time="2025-05-13T12:37:28.362587053Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:37:28.363018 containerd[1515]: time="2025-05-13T12:37:28.362985689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 12:37:28.363417 containerd[1515]: time="2025-05-13T12:37:28.363386567Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:37:28.364060 containerd[1515]: time="2025-05-13T12:37:28.364039015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 12:37:28.367074 containerd[1515]: time="2025-05-13T12:37:28.367030546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:37:28.367781 containerd[1515]: time="2025-05-13T12:37:28.367712103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.15688ms" May 13 12:37:28.370783 containerd[1515]: time="2025-05-13T12:37:28.370729460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 554.535843ms" May 13 12:37:28.371442 containerd[1515]: time="2025-05-13T12:37:28.371413780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 551.171194ms" May 13 12:37:28.389847 containerd[1515]: time="2025-05-13T12:37:28.389499742Z" level=info msg="connecting to shim 
dd29b678b62e14c927c0216856e9828032314ce6d3789991bb903485c978cc94" address="unix:///run/containerd/s/0140d0ec77c248493b2e34f7a7990b04cff9df23d6f80d800cc5c6ebe2c4a4c5" namespace=k8s.io protocol=ttrpc version=3 May 13 12:37:28.392366 containerd[1515]: time="2025-05-13T12:37:28.392122907Z" level=info msg="connecting to shim fb16e81a7c7bf66b69a541b9f6dada684364b53c9e550c5b71a9db6637dd7213" address="unix:///run/containerd/s/00100a3792d92753ed54bf0909672a64d74b3e541e7ded084f22c6f45fc8d647" namespace=k8s.io protocol=ttrpc version=3 May 13 12:37:28.398735 containerd[1515]: time="2025-05-13T12:37:28.398686987Z" level=info msg="connecting to shim b46cc38a2a194a3b8964636e315eb56d2ea8c65da5682373565cccb9361f5cdb" address="unix:///run/containerd/s/40f2d3bf0aac11f52bd0347f8f23a73e2ca85112c1f5fcb63782acc59573d720" namespace=k8s.io protocol=ttrpc version=3 May 13 12:37:28.411052 systemd[1]: Started cri-containerd-dd29b678b62e14c927c0216856e9828032314ce6d3789991bb903485c978cc94.scope - libcontainer container dd29b678b62e14c927c0216856e9828032314ce6d3789991bb903485c978cc94. May 13 12:37:28.414628 systemd[1]: Started cri-containerd-fb16e81a7c7bf66b69a541b9f6dada684364b53c9e550c5b71a9db6637dd7213.scope - libcontainer container fb16e81a7c7bf66b69a541b9f6dada684364b53c9e550c5b71a9db6637dd7213. May 13 12:37:28.418013 systemd[1]: Started cri-containerd-b46cc38a2a194a3b8964636e315eb56d2ea8c65da5682373565cccb9361f5cdb.scope - libcontainer container b46cc38a2a194a3b8964636e315eb56d2ea8c65da5682373565cccb9361f5cdb. 
May 13 12:37:28.447346 containerd[1515]: time="2025-05-13T12:37:28.447296103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb16e81a7c7bf66b69a541b9f6dada684364b53c9e550c5b71a9db6637dd7213\"" May 13 12:37:28.455278 containerd[1515]: time="2025-05-13T12:37:28.455229703Z" level=info msg="CreateContainer within sandbox \"fb16e81a7c7bf66b69a541b9f6dada684364b53c9e550c5b71a9db6637dd7213\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 12:37:28.459680 containerd[1515]: time="2025-05-13T12:37:28.459644728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c32685094354db7a8efdc99a6bb16702,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd29b678b62e14c927c0216856e9828032314ce6d3789991bb903485c978cc94\"" May 13 12:37:28.461318 containerd[1515]: time="2025-05-13T12:37:28.461287319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"b46cc38a2a194a3b8964636e315eb56d2ea8c65da5682373565cccb9361f5cdb\"" May 13 12:37:28.462345 containerd[1515]: time="2025-05-13T12:37:28.462082509Z" level=info msg="CreateContainer within sandbox \"dd29b678b62e14c927c0216856e9828032314ce6d3789991bb903485c978cc94\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 12:37:28.463576 containerd[1515]: time="2025-05-13T12:37:28.463540877Z" level=info msg="CreateContainer within sandbox \"b46cc38a2a194a3b8964636e315eb56d2ea8c65da5682373565cccb9361f5cdb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 12:37:28.468025 containerd[1515]: time="2025-05-13T12:37:28.467995421Z" level=info msg="Container fd469b6d02d9e2d91614e90febcb49bdeabc15dc59ec3e7326659bb41e4e4fdc: CDI devices from CRI Config.CDIDevices: []" May 13 
12:37:28.469339 containerd[1515]: time="2025-05-13T12:37:28.469299597Z" level=info msg="Container 41db8531f4afcba9ff763ae4b6ec8265768575d46dffb9592ee870726b29d6dc: CDI devices from CRI Config.CDIDevices: []" May 13 12:37:28.471898 containerd[1515]: time="2025-05-13T12:37:28.471802082Z" level=info msg="Container 0cf131dcef3541b0d01e41de73c7ca15fd56af4a4d95e2fbbca547b79373164c: CDI devices from CRI Config.CDIDevices: []" May 13 12:37:28.477499 containerd[1515]: time="2025-05-13T12:37:28.477468550Z" level=info msg="CreateContainer within sandbox \"fb16e81a7c7bf66b69a541b9f6dada684364b53c9e550c5b71a9db6637dd7213\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fd469b6d02d9e2d91614e90febcb49bdeabc15dc59ec3e7326659bb41e4e4fdc\"" May 13 12:37:28.478165 containerd[1515]: time="2025-05-13T12:37:28.478135172Z" level=info msg="StartContainer for \"fd469b6d02d9e2d91614e90febcb49bdeabc15dc59ec3e7326659bb41e4e4fdc\"" May 13 12:37:28.479123 containerd[1515]: time="2025-05-13T12:37:28.479074625Z" level=info msg="CreateContainer within sandbox \"dd29b678b62e14c927c0216856e9828032314ce6d3789991bb903485c978cc94\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41db8531f4afcba9ff763ae4b6ec8265768575d46dffb9592ee870726b29d6dc\"" May 13 12:37:28.479233 containerd[1515]: time="2025-05-13T12:37:28.479127397Z" level=info msg="connecting to shim fd469b6d02d9e2d91614e90febcb49bdeabc15dc59ec3e7326659bb41e4e4fdc" address="unix:///run/containerd/s/00100a3792d92753ed54bf0909672a64d74b3e541e7ded084f22c6f45fc8d647" protocol=ttrpc version=3 May 13 12:37:28.479531 containerd[1515]: time="2025-05-13T12:37:28.479461609Z" level=info msg="StartContainer for \"41db8531f4afcba9ff763ae4b6ec8265768575d46dffb9592ee870726b29d6dc\"" May 13 12:37:28.480656 containerd[1515]: time="2025-05-13T12:37:28.480633013Z" level=info msg="connecting to shim 41db8531f4afcba9ff763ae4b6ec8265768575d46dffb9592ee870726b29d6dc" 
address="unix:///run/containerd/s/0140d0ec77c248493b2e34f7a7990b04cff9df23d6f80d800cc5c6ebe2c4a4c5" protocol=ttrpc version=3 May 13 12:37:28.480881 containerd[1515]: time="2025-05-13T12:37:28.480850028Z" level=info msg="CreateContainer within sandbox \"b46cc38a2a194a3b8964636e315eb56d2ea8c65da5682373565cccb9361f5cdb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0cf131dcef3541b0d01e41de73c7ca15fd56af4a4d95e2fbbca547b79373164c\"" May 13 12:37:28.481249 containerd[1515]: time="2025-05-13T12:37:28.481223159Z" level=info msg="StartContainer for \"0cf131dcef3541b0d01e41de73c7ca15fd56af4a4d95e2fbbca547b79373164c\"" May 13 12:37:28.482127 containerd[1515]: time="2025-05-13T12:37:28.482100910Z" level=info msg="connecting to shim 0cf131dcef3541b0d01e41de73c7ca15fd56af4a4d95e2fbbca547b79373164c" address="unix:///run/containerd/s/40f2d3bf0aac11f52bd0347f8f23a73e2ca85112c1f5fcb63782acc59573d720" protocol=ttrpc version=3 May 13 12:37:28.500038 systemd[1]: Started cri-containerd-fd469b6d02d9e2d91614e90febcb49bdeabc15dc59ec3e7326659bb41e4e4fdc.scope - libcontainer container fd469b6d02d9e2d91614e90febcb49bdeabc15dc59ec3e7326659bb41e4e4fdc. May 13 12:37:28.504790 systemd[1]: Started cri-containerd-0cf131dcef3541b0d01e41de73c7ca15fd56af4a4d95e2fbbca547b79373164c.scope - libcontainer container 0cf131dcef3541b0d01e41de73c7ca15fd56af4a4d95e2fbbca547b79373164c. May 13 12:37:28.505760 systemd[1]: Started cri-containerd-41db8531f4afcba9ff763ae4b6ec8265768575d46dffb9592ee870726b29d6dc.scope - libcontainer container 41db8531f4afcba9ff763ae4b6ec8265768575d46dffb9592ee870726b29d6dc. 
May 13 12:37:28.543131 containerd[1515]: time="2025-05-13T12:37:28.543000394Z" level=info msg="StartContainer for \"fd469b6d02d9e2d91614e90febcb49bdeabc15dc59ec3e7326659bb41e4e4fdc\" returns successfully" May 13 12:37:28.544783 containerd[1515]: time="2025-05-13T12:37:28.544757619Z" level=info msg="StartContainer for \"41db8531f4afcba9ff763ae4b6ec8265768575d46dffb9592ee870726b29d6dc\" returns successfully" May 13 12:37:28.560339 containerd[1515]: time="2025-05-13T12:37:28.557368183Z" level=info msg="StartContainer for \"0cf131dcef3541b0d01e41de73c7ca15fd56af4a4d95e2fbbca547b79373164c\" returns successfully" May 13 12:37:28.571459 kubelet[2391]: W0513 12:37:28.571396 2391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:28.571512 kubelet[2391]: E0513 12:37:28.571469 2391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:28.699695 kubelet[2391]: W0513 12:37:28.699606 2391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:28.699695 kubelet[2391]: E0513 12:37:28.699672 2391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused May 13 12:37:28.880968 kubelet[2391]: I0513 12:37:28.880688 2391 kubelet_node_status.go:73] "Attempting to 
register node" node="localhost" May 13 12:37:30.351175 kubelet[2391]: E0513 12:37:30.351132 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 12:37:30.363476 kubelet[2391]: I0513 12:37:30.363449 2391 apiserver.go:52] "Watching apiserver" May 13 12:37:30.450904 kubelet[2391]: I0513 12:37:30.450140 2391 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 12:37:30.471178 kubelet[2391]: I0513 12:37:30.470865 2391 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:37:30.554953 kubelet[2391]: E0513 12:37:30.554919 2391 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 12:37:32.224055 systemd[1]: Reload requested from client PID 2677 ('systemctl') (unit session-7.scope)... May 13 12:37:32.224072 systemd[1]: Reloading... May 13 12:37:32.294938 zram_generator::config[2720]: No configuration found. May 13 12:37:32.365227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:37:32.463927 systemd[1]: Reloading finished in 239 ms. May 13 12:37:32.493659 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:37:32.508923 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:37:32.509544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:37:32.509604 systemd[1]: kubelet.service: Consumed 973ms CPU time, 112.4M memory peak. May 13 12:37:32.511380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 13 12:37:32.647742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:37:32.651304 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:37:32.692303 kubelet[2762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:37:32.692303 kubelet[2762]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 12:37:32.692303 kubelet[2762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:37:32.692615 kubelet[2762]: I0513 12:37:32.692346 2762 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:37:32.698756 kubelet[2762]: I0513 12:37:32.698241 2762 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 12:37:32.698756 kubelet[2762]: I0513 12:37:32.698264 2762 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:37:32.698756 kubelet[2762]: I0513 12:37:32.698414 2762 server.go:927] "Client rotation is on, will bootstrap in background" May 13 12:37:32.699679 kubelet[2762]: I0513 12:37:32.699656 2762 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 13 12:37:32.700926 kubelet[2762]: I0513 12:37:32.700900 2762 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 12:37:32.705535 kubelet[2762]: I0513 12:37:32.705499 2762 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 12:37:32.705717 kubelet[2762]: I0513 12:37:32.705675 2762 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 12:37:32.705856 kubelet[2762]: I0513 12:37:32.705700 2762 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 12:37:32.705856 kubelet[2762]: I0513 12:37:32.705852 2762 topology_manager.go:138] "Creating topology manager with none policy"
May 13 12:37:32.705964 kubelet[2762]: I0513 12:37:32.705860 2762 container_manager_linux.go:301] "Creating device plugin manager"
May 13 12:37:32.705964 kubelet[2762]: I0513 12:37:32.705924 2762 state_mem.go:36] "Initialized new in-memory state store"
May 13 12:37:32.706083 kubelet[2762]: I0513 12:37:32.706061 2762 kubelet.go:400] "Attempting to sync node with API server"
May 13 12:37:32.706083 kubelet[2762]: I0513 12:37:32.706078 2762 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 12:37:32.706129 kubelet[2762]: I0513 12:37:32.706101 2762 kubelet.go:312] "Adding apiserver pod source"
May 13 12:37:32.706129 kubelet[2762]: I0513 12:37:32.706113 2762 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 12:37:32.706923 kubelet[2762]: I0513 12:37:32.706903 2762 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 13 12:37:32.707137 kubelet[2762]: I0513 12:37:32.707124 2762 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 12:37:32.708024 kubelet[2762]: I0513 12:37:32.708002 2762 server.go:1264] "Started kubelet"
May 13 12:37:32.708589 kubelet[2762]: I0513 12:37:32.708556 2762 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 12:37:32.712466 kubelet[2762]: I0513 12:37:32.709198 2762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 12:37:32.712466 kubelet[2762]: I0513 12:37:32.712402 2762 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 12:37:32.714295 kubelet[2762]: I0513 12:37:32.714166 2762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 12:37:32.716219 kubelet[2762]: I0513 12:37:32.716203 2762 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 12:37:32.716661 kubelet[2762]: I0513 12:37:32.716634 2762 server.go:455] "Adding debug handlers to kubelet server"
May 13 12:37:32.720243 kubelet[2762]: I0513 12:37:32.720205 2762 factory.go:221] Registration of the systemd container factory successfully
May 13 12:37:32.720308 kubelet[2762]: I0513 12:37:32.720292 2762 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 12:37:32.721897 kubelet[2762]: I0513 12:37:32.720620 2762 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 12:37:32.721897 kubelet[2762]: I0513 12:37:32.720820 2762 reconciler.go:26] "Reconciler: start to sync state"
May 13 12:37:32.721897 kubelet[2762]: I0513 12:37:32.721312 2762 factory.go:221] Registration of the containerd container factory successfully
May 13 12:37:32.721897 kubelet[2762]: E0513 12:37:32.721356 2762 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 12:37:32.729010 kubelet[2762]: I0513 12:37:32.728974 2762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 12:37:32.730306 kubelet[2762]: I0513 12:37:32.730040 2762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 12:37:32.730306 kubelet[2762]: I0513 12:37:32.730070 2762 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 12:37:32.730306 kubelet[2762]: I0513 12:37:32.730086 2762 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 12:37:32.730306 kubelet[2762]: E0513 12:37:32.730119 2762 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 12:37:32.766163 kubelet[2762]: I0513 12:37:32.766080 2762 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 12:37:32.766163 kubelet[2762]: I0513 12:37:32.766101 2762 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 12:37:32.766163 kubelet[2762]: I0513 12:37:32.766136 2762 state_mem.go:36] "Initialized new in-memory state store"
May 13 12:37:32.766309 kubelet[2762]: I0513 12:37:32.766277 2762 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 12:37:32.766331 kubelet[2762]: I0513 12:37:32.766289 2762 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 12:37:32.766331 kubelet[2762]: I0513 12:37:32.766319 2762 policy_none.go:49] "None policy: Start"
May 13 12:37:32.766913 kubelet[2762]: I0513 12:37:32.766879 2762 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 12:37:32.766981 kubelet[2762]: I0513 12:37:32.766918 2762 state_mem.go:35] "Initializing new in-memory state store"
May 13 12:37:32.767080 kubelet[2762]: I0513 12:37:32.767062 2762 state_mem.go:75] "Updated machine memory state"
May 13 12:37:32.771335 kubelet[2762]: I0513 12:37:32.771284 2762 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 12:37:32.771607 kubelet[2762]: I0513 12:37:32.771450 2762 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 12:37:32.771607 kubelet[2762]: I0513 12:37:32.771549 2762 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 12:37:32.818249 kubelet[2762]: I0513 12:37:32.818226 2762 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 12:37:32.824518 kubelet[2762]: I0513 12:37:32.824424 2762 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 13 12:37:32.824843 kubelet[2762]: I0513 12:37:32.824826 2762 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 13 12:37:32.830274 kubelet[2762]: I0513 12:37:32.830240 2762 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 13 12:37:32.830371 kubelet[2762]: I0513 12:37:32.830348 2762 topology_manager.go:215] "Topology Admit Handler" podUID="c32685094354db7a8efdc99a6bb16702" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 13 12:37:32.830412 kubelet[2762]: I0513 12:37:32.830387 2762 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 13 12:37:32.922632 kubelet[2762]: I0513 12:37:32.922602 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 12:37:32.922720 kubelet[2762]: I0513 12:37:32.922634 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 12:37:32.922720 kubelet[2762]: I0513 12:37:32.922657 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 12:37:32.922720 kubelet[2762]: I0513 12:37:32.922673 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 12:37:32.922720 kubelet[2762]: I0513 12:37:32.922691 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 13 12:37:32.922720 kubelet[2762]: I0513 12:37:32.922706 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c32685094354db7a8efdc99a6bb16702-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c32685094354db7a8efdc99a6bb16702\") " pod="kube-system/kube-apiserver-localhost"
May 13 12:37:32.922831 kubelet[2762]: I0513 12:37:32.922723 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c32685094354db7a8efdc99a6bb16702-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c32685094354db7a8efdc99a6bb16702\") " pod="kube-system/kube-apiserver-localhost"
May 13 12:37:32.922831 kubelet[2762]: I0513 12:37:32.922738 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c32685094354db7a8efdc99a6bb16702-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c32685094354db7a8efdc99a6bb16702\") " pod="kube-system/kube-apiserver-localhost"
May 13 12:37:32.922831 kubelet[2762]: I0513 12:37:32.922757 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 12:37:33.707136 kubelet[2762]: I0513 12:37:33.707057 2762 apiserver.go:52] "Watching apiserver"
May 13 12:37:33.717079 kubelet[2762]: I0513 12:37:33.717036 2762 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 12:37:33.762177 kubelet[2762]: E0513 12:37:33.761550 2762 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 12:37:33.763002 kubelet[2762]: E0513 12:37:33.762966 2762 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 13 12:37:33.801368 kubelet[2762]: I0513 12:37:33.801286 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.80126982 podStartE2EDuration="1.80126982s" podCreationTimestamp="2025-05-13 12:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:37:33.80111635 +0000 UTC m=+1.146946796" watchObservedRunningTime="2025-05-13 12:37:33.80126982 +0000 UTC m=+1.147100226"
May 13 12:37:33.801626 kubelet[2762]: I0513 12:37:33.801546 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.801538373 podStartE2EDuration="1.801538373s" podCreationTimestamp="2025-05-13 12:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:37:33.791339356 +0000 UTC m=+1.137169802" watchObservedRunningTime="2025-05-13 12:37:33.801538373 +0000 UTC m=+1.147368819"
May 13 12:37:33.809604 kubelet[2762]: I0513 12:37:33.809539 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.809521877 podStartE2EDuration="1.809521877s" podCreationTimestamp="2025-05-13 12:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:37:33.80874704 +0000 UTC m=+1.154577486" watchObservedRunningTime="2025-05-13 12:37:33.809521877 +0000 UTC m=+1.155352323"
May 13 12:37:37.320225 sudo[1711]: pam_unix(sudo:session): session closed for user root
May 13 12:37:37.321237 sshd[1710]: Connection closed by 10.0.0.1 port 52550
May 13 12:37:37.321654 sshd-session[1708]: pam_unix(sshd:session): session closed for user core
May 13 12:37:37.324391 systemd[1]: sshd@6-10.0.0.46:22-10.0.0.1:52550.service: Deactivated successfully.
May 13 12:37:37.326158 systemd[1]: session-7.scope: Deactivated successfully.
May 13 12:37:37.326387 systemd[1]: session-7.scope: Consumed 8.074s CPU time, 248.8M memory peak.
May 13 12:37:37.327875 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit.
May 13 12:37:37.329339 systemd-logind[1485]: Removed session 7.
May 13 12:37:47.518465 update_engine[1488]: I20250513 12:37:47.518383 1488 update_attempter.cc:509] Updating boot flags...
May 13 12:37:47.865246 kubelet[2762]: I0513 12:37:47.865192 2762 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 12:37:47.869115 containerd[1515]: time="2025-05-13T12:37:47.869075358Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 12:37:47.869513 kubelet[2762]: I0513 12:37:47.869267 2762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 12:37:48.624781 kubelet[2762]: I0513 12:37:48.624727 2762 topology_manager.go:215] "Topology Admit Handler" podUID="3e7b4c23-b847-42ad-a8da-fb70040dee92" podNamespace="kube-system" podName="kube-proxy-hrlf2"
May 13 12:37:48.634728 systemd[1]: Created slice kubepods-besteffort-pod3e7b4c23_b847_42ad_a8da_fb70040dee92.slice - libcontainer container kubepods-besteffort-pod3e7b4c23_b847_42ad_a8da_fb70040dee92.slice.
May 13 12:37:48.727757 kubelet[2762]: I0513 12:37:48.727707 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e7b4c23-b847-42ad-a8da-fb70040dee92-kube-proxy\") pod \"kube-proxy-hrlf2\" (UID: \"3e7b4c23-b847-42ad-a8da-fb70040dee92\") " pod="kube-system/kube-proxy-hrlf2"
May 13 12:37:48.727757 kubelet[2762]: I0513 12:37:48.727755 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e7b4c23-b847-42ad-a8da-fb70040dee92-xtables-lock\") pod \"kube-proxy-hrlf2\" (UID: \"3e7b4c23-b847-42ad-a8da-fb70040dee92\") " pod="kube-system/kube-proxy-hrlf2"
May 13 12:37:48.727984 kubelet[2762]: I0513 12:37:48.727773 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e7b4c23-b847-42ad-a8da-fb70040dee92-lib-modules\") pod \"kube-proxy-hrlf2\" (UID: \"3e7b4c23-b847-42ad-a8da-fb70040dee92\") " pod="kube-system/kube-proxy-hrlf2"
May 13 12:37:48.727984 kubelet[2762]: I0513 12:37:48.727791 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwz7\" (UniqueName: \"kubernetes.io/projected/3e7b4c23-b847-42ad-a8da-fb70040dee92-kube-api-access-dpwz7\") pod \"kube-proxy-hrlf2\" (UID: \"3e7b4c23-b847-42ad-a8da-fb70040dee92\") " pod="kube-system/kube-proxy-hrlf2"
May 13 12:37:48.948691 containerd[1515]: time="2025-05-13T12:37:48.948373574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrlf2,Uid:3e7b4c23-b847-42ad-a8da-fb70040dee92,Namespace:kube-system,Attempt:0,}"
May 13 12:37:48.969152 containerd[1515]: time="2025-05-13T12:37:48.968945318Z" level=info msg="connecting to shim 9317a9ddc3f9dd614771fd02884a224bb6854fa056aa2317a46a980fd1562b57" address="unix:///run/containerd/s/103baa7d26f78770c19db2f6aaa123ebdc9505de8ba008b05c2aaa9887e24c22" namespace=k8s.io protocol=ttrpc version=3
May 13 12:37:48.985239 kubelet[2762]: I0513 12:37:48.985196 2762 topology_manager.go:215] "Topology Admit Handler" podUID="77f3a955-d68d-4643-b59f-b4ffa6aaac49" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-rczjl"
May 13 12:37:48.994607 systemd[1]: Created slice kubepods-besteffort-pod77f3a955_d68d_4643_b59f_b4ffa6aaac49.slice - libcontainer container kubepods-besteffort-pod77f3a955_d68d_4643_b59f_b4ffa6aaac49.slice.
May 13 12:37:49.006040 systemd[1]: Started cri-containerd-9317a9ddc3f9dd614771fd02884a224bb6854fa056aa2317a46a980fd1562b57.scope - libcontainer container 9317a9ddc3f9dd614771fd02884a224bb6854fa056aa2317a46a980fd1562b57.
May 13 12:37:49.026958 containerd[1515]: time="2025-05-13T12:37:49.026920562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hrlf2,Uid:3e7b4c23-b847-42ad-a8da-fb70040dee92,Namespace:kube-system,Attempt:0,} returns sandbox id \"9317a9ddc3f9dd614771fd02884a224bb6854fa056aa2317a46a980fd1562b57\""
May 13 12:37:49.029411 kubelet[2762]: I0513 12:37:49.029383 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhljx\" (UniqueName: \"kubernetes.io/projected/77f3a955-d68d-4643-b59f-b4ffa6aaac49-kube-api-access-mhljx\") pod \"tigera-operator-797db67f8-rczjl\" (UID: \"77f3a955-d68d-4643-b59f-b4ffa6aaac49\") " pod="tigera-operator/tigera-operator-797db67f8-rczjl"
May 13 12:37:49.029525 kubelet[2762]: I0513 12:37:49.029421 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77f3a955-d68d-4643-b59f-b4ffa6aaac49-var-lib-calico\") pod \"tigera-operator-797db67f8-rczjl\" (UID: \"77f3a955-d68d-4643-b59f-b4ffa6aaac49\") " pod="tigera-operator/tigera-operator-797db67f8-rczjl"
May 13 12:37:49.032214 containerd[1515]: time="2025-05-13T12:37:49.032172268Z" level=info msg="CreateContainer within sandbox \"9317a9ddc3f9dd614771fd02884a224bb6854fa056aa2317a46a980fd1562b57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 12:37:49.065634 containerd[1515]: time="2025-05-13T12:37:49.065253427Z" level=info msg="Container 24620c59fe56b1af5b566b40097085eb012eb06f469d06dcc73ca82accec5557: CDI devices from CRI Config.CDIDevices: []"
May 13 12:37:49.068931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1769635529.mount: Deactivated successfully.
May 13 12:37:49.073477 containerd[1515]: time="2025-05-13T12:37:49.073441205Z" level=info msg="CreateContainer within sandbox \"9317a9ddc3f9dd614771fd02884a224bb6854fa056aa2317a46a980fd1562b57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24620c59fe56b1af5b566b40097085eb012eb06f469d06dcc73ca82accec5557\""
May 13 12:37:49.077901 containerd[1515]: time="2025-05-13T12:37:49.077867740Z" level=info msg="StartContainer for \"24620c59fe56b1af5b566b40097085eb012eb06f469d06dcc73ca82accec5557\""
May 13 12:37:49.079212 containerd[1515]: time="2025-05-13T12:37:49.079185397Z" level=info msg="connecting to shim 24620c59fe56b1af5b566b40097085eb012eb06f469d06dcc73ca82accec5557" address="unix:///run/containerd/s/103baa7d26f78770c19db2f6aaa123ebdc9505de8ba008b05c2aaa9887e24c22" protocol=ttrpc version=3
May 13 12:37:49.099049 systemd[1]: Started cri-containerd-24620c59fe56b1af5b566b40097085eb012eb06f469d06dcc73ca82accec5557.scope - libcontainer container 24620c59fe56b1af5b566b40097085eb012eb06f469d06dcc73ca82accec5557.
May 13 12:37:49.135230 containerd[1515]: time="2025-05-13T12:37:49.135173787Z" level=info msg="StartContainer for \"24620c59fe56b1af5b566b40097085eb012eb06f469d06dcc73ca82accec5557\" returns successfully"
May 13 12:37:49.298294 containerd[1515]: time="2025-05-13T12:37:49.298180445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-rczjl,Uid:77f3a955-d68d-4643-b59f-b4ffa6aaac49,Namespace:tigera-operator,Attempt:0,}"
May 13 12:37:49.323786 containerd[1515]: time="2025-05-13T12:37:49.323736314Z" level=info msg="connecting to shim 9c19a13be8d17dfba39fe2440de70cfaa78da681575d03b5074893d763ab1a31" address="unix:///run/containerd/s/736f0a8e6df19f9db0f85c7e0caad304c1122bd279faa8b548775e320427bac7" namespace=k8s.io protocol=ttrpc version=3
May 13 12:37:49.346056 systemd[1]: Started cri-containerd-9c19a13be8d17dfba39fe2440de70cfaa78da681575d03b5074893d763ab1a31.scope - libcontainer container 9c19a13be8d17dfba39fe2440de70cfaa78da681575d03b5074893d763ab1a31.
May 13 12:37:49.375135 containerd[1515]: time="2025-05-13T12:37:49.375084595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-rczjl,Uid:77f3a955-d68d-4643-b59f-b4ffa6aaac49,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9c19a13be8d17dfba39fe2440de70cfaa78da681575d03b5074893d763ab1a31\""
May 13 12:37:49.380760 containerd[1515]: time="2025-05-13T12:37:49.380729201Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 13 12:37:50.983917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528678724.mount: Deactivated successfully.
May 13 12:37:52.109920 containerd[1515]: time="2025-05-13T12:37:52.109867042Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:52.110793 containerd[1515]: time="2025-05-13T12:37:52.110393113Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 13 12:37:52.111372 containerd[1515]: time="2025-05-13T12:37:52.111335232Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:52.113419 containerd[1515]: time="2025-05-13T12:37:52.113386746Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:52.114057 containerd[1515]: time="2025-05-13T12:37:52.114019519Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.733252749s"
May 13 12:37:52.114057 containerd[1515]: time="2025-05-13T12:37:52.114051966Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 13 12:37:52.118817 containerd[1515]: time="2025-05-13T12:37:52.118781045Z" level=info msg="CreateContainer within sandbox \"9c19a13be8d17dfba39fe2440de70cfaa78da681575d03b5074893d763ab1a31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 13 12:37:52.124399 containerd[1515]: time="2025-05-13T12:37:52.124367985Z" level=info msg="Container d357eff3205cd28a9597109a3572321d5828d21b812d0d5690558cb1ba245adf: CDI devices from CRI Config.CDIDevices: []"
May 13 12:37:52.129100 containerd[1515]: time="2025-05-13T12:37:52.129067737Z" level=info msg="CreateContainer within sandbox \"9c19a13be8d17dfba39fe2440de70cfaa78da681575d03b5074893d763ab1a31\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d357eff3205cd28a9597109a3572321d5828d21b812d0d5690558cb1ba245adf\""
May 13 12:37:52.129703 containerd[1515]: time="2025-05-13T12:37:52.129479704Z" level=info msg="StartContainer for \"d357eff3205cd28a9597109a3572321d5828d21b812d0d5690558cb1ba245adf\""
May 13 12:37:52.130803 containerd[1515]: time="2025-05-13T12:37:52.130768216Z" level=info msg="connecting to shim d357eff3205cd28a9597109a3572321d5828d21b812d0d5690558cb1ba245adf" address="unix:///run/containerd/s/736f0a8e6df19f9db0f85c7e0caad304c1122bd279faa8b548775e320427bac7" protocol=ttrpc version=3
May 13 12:37:52.178052 systemd[1]: Started cri-containerd-d357eff3205cd28a9597109a3572321d5828d21b812d0d5690558cb1ba245adf.scope - libcontainer container d357eff3205cd28a9597109a3572321d5828d21b812d0d5690558cb1ba245adf.
May 13 12:37:52.201916 containerd[1515]: time="2025-05-13T12:37:52.201852229Z" level=info msg="StartContainer for \"d357eff3205cd28a9597109a3572321d5828d21b812d0d5690558cb1ba245adf\" returns successfully"
May 13 12:37:52.742270 kubelet[2762]: I0513 12:37:52.742117 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hrlf2" podStartSLOduration=4.742100803 podStartE2EDuration="4.742100803s" podCreationTimestamp="2025-05-13 12:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:37:49.785117124 +0000 UTC m=+17.130947570" watchObservedRunningTime="2025-05-13 12:37:52.742100803 +0000 UTC m=+20.087931249"
May 13 12:37:52.789998 kubelet[2762]: I0513 12:37:52.789925 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-rczjl" podStartSLOduration=2.049479281 podStartE2EDuration="4.789857209s" podCreationTimestamp="2025-05-13 12:37:48 +0000 UTC" firstStartedPulling="2025-05-13 12:37:49.376445784 +0000 UTC m=+16.722276230" lastFinishedPulling="2025-05-13 12:37:52.116823711 +0000 UTC m=+19.462654158" observedRunningTime="2025-05-13 12:37:52.789828563 +0000 UTC m=+20.135659009" watchObservedRunningTime="2025-05-13 12:37:52.789857209 +0000 UTC m=+20.135687735"
May 13 12:37:55.812707 kubelet[2762]: I0513 12:37:55.812636 2762 topology_manager.go:215] "Topology Admit Handler" podUID="ee78474d-4c1d-4dc3-b21b-567a135db359" podNamespace="calico-system" podName="calico-typha-d9bc59ccf-2cm5r"
May 13 12:37:55.819622 systemd[1]: Created slice kubepods-besteffort-podee78474d_4c1d_4dc3_b21b_567a135db359.slice - libcontainer container kubepods-besteffort-podee78474d_4c1d_4dc3_b21b_567a135db359.slice.
May 13 12:37:55.851370 kubelet[2762]: I0513 12:37:55.851328 2762 topology_manager.go:215] "Topology Admit Handler" podUID="c44a3acd-72a7-481d-97bf-bd9ad94cfff7" podNamespace="calico-system" podName="calico-node-7hsdq" May 13 12:37:55.858620 systemd[1]: Created slice kubepods-besteffort-podc44a3acd_72a7_481d_97bf_bd9ad94cfff7.slice - libcontainer container kubepods-besteffort-podc44a3acd_72a7_481d_97bf_bd9ad94cfff7.slice. May 13 12:37:55.874780 kubelet[2762]: I0513 12:37:55.874746 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-cni-bin-dir\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.874917 kubelet[2762]: I0513 12:37:55.874836 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-var-lib-calico\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.874917 kubelet[2762]: I0513 12:37:55.874856 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-lib-modules\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.874917 kubelet[2762]: I0513 12:37:55.874901 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-cni-net-dir\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.874985 kubelet[2762]: I0513 
12:37:55.874919 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-xtables-lock\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.874985 kubelet[2762]: I0513 12:37:55.874935 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-tigera-ca-bundle\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.875120 kubelet[2762]: I0513 12:37:55.874968 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee78474d-4c1d-4dc3-b21b-567a135db359-typha-certs\") pod \"calico-typha-d9bc59ccf-2cm5r\" (UID: \"ee78474d-4c1d-4dc3-b21b-567a135db359\") " pod="calico-system/calico-typha-d9bc59ccf-2cm5r" May 13 12:37:55.875120 kubelet[2762]: I0513 12:37:55.875097 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-cni-log-dir\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.875189 kubelet[2762]: I0513 12:37:55.875146 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-policysync\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.875213 kubelet[2762]: I0513 12:37:55.875188 2762 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcjgl\" (UniqueName: \"kubernetes.io/projected/ee78474d-4c1d-4dc3-b21b-567a135db359-kube-api-access-bcjgl\") pod \"calico-typha-d9bc59ccf-2cm5r\" (UID: \"ee78474d-4c1d-4dc3-b21b-567a135db359\") " pod="calico-system/calico-typha-d9bc59ccf-2cm5r" May 13 12:37:55.875237 kubelet[2762]: I0513 12:37:55.875217 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-flexvol-driver-host\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.875263 kubelet[2762]: I0513 12:37:55.875254 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-node-certs\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.875300 kubelet[2762]: I0513 12:37:55.875281 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-var-run-calico\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.875324 kubelet[2762]: I0513 12:37:55.875305 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfbpk\" (UniqueName: \"kubernetes.io/projected/c44a3acd-72a7-481d-97bf-bd9ad94cfff7-kube-api-access-pfbpk\") pod \"calico-node-7hsdq\" (UID: \"c44a3acd-72a7-481d-97bf-bd9ad94cfff7\") " pod="calico-system/calico-node-7hsdq" May 13 12:37:55.875344 kubelet[2762]: I0513 12:37:55.875329 2762 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee78474d-4c1d-4dc3-b21b-567a135db359-tigera-ca-bundle\") pod \"calico-typha-d9bc59ccf-2cm5r\" (UID: \"ee78474d-4c1d-4dc3-b21b-567a135db359\") " pod="calico-system/calico-typha-d9bc59ccf-2cm5r"
May 13 12:37:55.965426 kubelet[2762]: I0513 12:37:55.965343 2762 topology_manager.go:215] "Topology Admit Handler" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94" podNamespace="calico-system" podName="csi-node-driver-sjhqz"
May 13 12:37:55.965703 kubelet[2762]: E0513 12:37:55.965596 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:37:55.983226 kubelet[2762]: E0513 12:37:55.983064 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.983226 kubelet[2762]: W0513 12:37:55.983091 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.983226 kubelet[2762]: E0513 12:37:55.983120 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:55.986109 kubelet[2762]: E0513 12:37:55.986088 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.986224 kubelet[2762]: W0513 12:37:55.986209 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.986329 kubelet[2762]: E0513 12:37:55.986300 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:55.986582 kubelet[2762]: E0513 12:37:55.986560 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.986647 kubelet[2762]: W0513 12:37:55.986636 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.988448 kubelet[2762]: E0513 12:37:55.988370 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:55.990853 kubelet[2762]: E0513 12:37:55.990741 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.990853 kubelet[2762]: W0513 12:37:55.990760 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.991738 kubelet[2762]: E0513 12:37:55.991710 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:55.991844 kubelet[2762]: E0513 12:37:55.991829 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.992063 kubelet[2762]: W0513 12:37:55.992014 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.992385 kubelet[2762]: E0513 12:37:55.992277 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:55.993134 kubelet[2762]: E0513 12:37:55.993040 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.993134 kubelet[2762]: W0513 12:37:55.993055 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.993134 kubelet[2762]: E0513 12:37:55.993100 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:55.993301 kubelet[2762]: E0513 12:37:55.993287 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.993354 kubelet[2762]: W0513 12:37:55.993343 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.993720 kubelet[2762]: E0513 12:37:55.993644 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.994092 kubelet[2762]: W0513 12:37:55.994072 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.994330 kubelet[2762]: E0513 12:37:55.993694 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:55.994390 kubelet[2762]: E0513 12:37:55.994330 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:55.995942 kubelet[2762]: E0513 12:37:55.994463 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.996054 kubelet[2762]: W0513 12:37:55.996036 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.996243 kubelet[2762]: E0513 12:37:55.996210 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:55.996593 kubelet[2762]: E0513 12:37:55.996422 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.996593 kubelet[2762]: W0513 12:37:55.996436 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.996593 kubelet[2762]: E0513 12:37:55.996517 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:55.996793 kubelet[2762]: E0513 12:37:55.996779 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.996842 kubelet[2762]: W0513 12:37:55.996832 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.996962 kubelet[2762]: E0513 12:37:55.996937 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:55.997186 kubelet[2762]: E0513 12:37:55.997136 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.997276 kubelet[2762]: W0513 12:37:55.997262 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.997373 kubelet[2762]: E0513 12:37:55.997349 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:55.998250 kubelet[2762]: E0513 12:37:55.998216 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:55.998600 kubelet[2762]: W0513 12:37:55.998579 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:55.999914 kubelet[2762]: E0513 12:37:55.998708 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.000353 kubelet[2762]: E0513 12:37:56.000163 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.000353 kubelet[2762]: W0513 12:37:56.000179 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.000353 kubelet[2762]: E0513 12:37:56.000225 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.000539 kubelet[2762]: E0513 12:37:56.000525 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.000591 kubelet[2762]: W0513 12:37:56.000580 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.000726 kubelet[2762]: E0513 12:37:56.000695 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.000813 kubelet[2762]: E0513 12:37:56.000802 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.000858 kubelet[2762]: W0513 12:37:56.000849 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.000956 kubelet[2762]: E0513 12:37:56.000932 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.001126 kubelet[2762]: E0513 12:37:56.001113 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.001194 kubelet[2762]: W0513 12:37:56.001182 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.001342 kubelet[2762]: E0513 12:37:56.001283 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.001611 kubelet[2762]: E0513 12:37:56.001591 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.001678 kubelet[2762]: W0513 12:37:56.001667 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.001786 kubelet[2762]: E0513 12:37:56.001760 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.002091 kubelet[2762]: E0513 12:37:56.002072 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.002219 kubelet[2762]: W0513 12:37:56.002196 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.002298 kubelet[2762]: E0513 12:37:56.002283 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.002720 kubelet[2762]: E0513 12:37:56.002673 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.002720 kubelet[2762]: W0513 12:37:56.002690 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.002720 kubelet[2762]: E0513 12:37:56.002709 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.003264 kubelet[2762]: E0513 12:37:56.003238 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.003264 kubelet[2762]: W0513 12:37:56.003255 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.003343 kubelet[2762]: E0513 12:37:56.003273 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.005031 kubelet[2762]: E0513 12:37:56.005003 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.005031 kubelet[2762]: W0513 12:37:56.005024 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.005125 kubelet[2762]: E0513 12:37:56.005043 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.005280 kubelet[2762]: E0513 12:37:56.005245 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.005280 kubelet[2762]: W0513 12:37:56.005274 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.005350 kubelet[2762]: E0513 12:37:56.005286 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.009561 kubelet[2762]: E0513 12:37:56.009520 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.009561 kubelet[2762]: W0513 12:37:56.009538 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.009561 kubelet[2762]: E0513 12:37:56.009553 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.016496 kubelet[2762]: E0513 12:37:56.016438 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.016496 kubelet[2762]: W0513 12:37:56.016453 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.016496 kubelet[2762]: E0513 12:37:56.016466 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.063468 kubelet[2762]: E0513 12:37:56.063280 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.063468 kubelet[2762]: W0513 12:37:56.063302 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.063468 kubelet[2762]: E0513 12:37:56.063318 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.063765 kubelet[2762]: E0513 12:37:56.063658 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.063765 kubelet[2762]: W0513 12:37:56.063671 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.063765 kubelet[2762]: E0513 12:37:56.063682 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.064706 kubelet[2762]: E0513 12:37:56.064690 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.064865 kubelet[2762]: W0513 12:37:56.064753 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.064865 kubelet[2762]: E0513 12:37:56.064769 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.065117 kubelet[2762]: E0513 12:37:56.065102 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.065348 kubelet[2762]: W0513 12:37:56.065310 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.065601 kubelet[2762]: E0513 12:37:56.065453 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.065753 kubelet[2762]: E0513 12:37:56.065737 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.065858 kubelet[2762]: W0513 12:37:56.065844 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.066122 kubelet[2762]: E0513 12:37:56.065923 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.066334 kubelet[2762]: E0513 12:37:56.066319 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.066445 kubelet[2762]: W0513 12:37:56.066431 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.066509 kubelet[2762]: E0513 12:37:56.066498 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.066780 kubelet[2762]: E0513 12:37:56.066691 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.066780 kubelet[2762]: W0513 12:37:56.066702 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.066780 kubelet[2762]: E0513 12:37:56.066711 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.066937 kubelet[2762]: E0513 12:37:56.066924 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.066987 kubelet[2762]: W0513 12:37:56.066977 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.067133 kubelet[2762]: E0513 12:37:56.067042 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.067243 kubelet[2762]: E0513 12:37:56.067229 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.067359 kubelet[2762]: W0513 12:37:56.067345 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.067411 kubelet[2762]: E0513 12:37:56.067400 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.067600 kubelet[2762]: E0513 12:37:56.067587 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.067658 kubelet[2762]: W0513 12:37:56.067649 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.067718 kubelet[2762]: E0513 12:37:56.067706 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.067997 kubelet[2762]: E0513 12:37:56.067905 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.067997 kubelet[2762]: W0513 12:37:56.067916 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.067997 kubelet[2762]: E0513 12:37:56.067926 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.068147 kubelet[2762]: E0513 12:37:56.068135 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.068195 kubelet[2762]: W0513 12:37:56.068185 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.068268 kubelet[2762]: E0513 12:37:56.068256 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.068538 kubelet[2762]: E0513 12:37:56.068452 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.068538 kubelet[2762]: W0513 12:37:56.068465 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.068538 kubelet[2762]: E0513 12:37:56.068474 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.068687 kubelet[2762]: E0513 12:37:56.068675 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.068735 kubelet[2762]: W0513 12:37:56.068725 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.068795 kubelet[2762]: E0513 12:37:56.068783 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.068975 kubelet[2762]: E0513 12:37:56.068962 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.069118 kubelet[2762]: W0513 12:37:56.069024 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.069118 kubelet[2762]: E0513 12:37:56.069038 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.069235 kubelet[2762]: E0513 12:37:56.069223 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.069282 kubelet[2762]: W0513 12:37:56.069273 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.069343 kubelet[2762]: E0513 12:37:56.069332 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.069541 kubelet[2762]: E0513 12:37:56.069529 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.069602 kubelet[2762]: W0513 12:37:56.069591 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.069660 kubelet[2762]: E0513 12:37:56.069649 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.069944 kubelet[2762]: E0513 12:37:56.069834 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.069944 kubelet[2762]: W0513 12:37:56.069844 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.069944 kubelet[2762]: E0513 12:37:56.069853 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.070101 kubelet[2762]: E0513 12:37:56.070089 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.070250 kubelet[2762]: W0513 12:37:56.070146 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.070250 kubelet[2762]: E0513 12:37:56.070161 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.070360 kubelet[2762]: E0513 12:37:56.070349 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.070415 kubelet[2762]: W0513 12:37:56.070405 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.070474 kubelet[2762]: E0513 12:37:56.070463 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.076599 kubelet[2762]: E0513 12:37:56.076583 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.076599 kubelet[2762]: W0513 12:37:56.076597 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.076687 kubelet[2762]: E0513 12:37:56.076609 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.076687 kubelet[2762]: I0513 12:37:56.076633 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3317169-9c1a-4d06-be39-f899c6b6ad94-socket-dir\") pod \"csi-node-driver-sjhqz\" (UID: \"d3317169-9c1a-4d06-be39-f899c6b6ad94\") " pod="calico-system/csi-node-driver-sjhqz" May 13 12:37:56.076812 kubelet[2762]: E0513 12:37:56.076799 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.076852 kubelet[2762]: W0513 12:37:56.076812 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.076852 kubelet[2762]: E0513 12:37:56.076826 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.076852 kubelet[2762]: I0513 12:37:56.076840 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3317169-9c1a-4d06-be39-f899c6b6ad94-kubelet-dir\") pod \"csi-node-driver-sjhqz\" (UID: \"d3317169-9c1a-4d06-be39-f899c6b6ad94\") " pod="calico-system/csi-node-driver-sjhqz" May 13 12:37:56.077738 kubelet[2762]: E0513 12:37:56.077721 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.077783 kubelet[2762]: W0513 12:37:56.077736 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.077783 kubelet[2762]: E0513 12:37:56.077758 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.077783 kubelet[2762]: I0513 12:37:56.077776 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3317169-9c1a-4d06-be39-f899c6b6ad94-registration-dir\") pod \"csi-node-driver-sjhqz\" (UID: \"d3317169-9c1a-4d06-be39-f899c6b6ad94\") " pod="calico-system/csi-node-driver-sjhqz" May 13 12:37:56.077961 kubelet[2762]: E0513 12:37:56.077939 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.077961 kubelet[2762]: W0513 12:37:56.077952 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.078028 kubelet[2762]: E0513 12:37:56.077988 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.078028 kubelet[2762]: I0513 12:37:56.078021 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d3317169-9c1a-4d06-be39-f899c6b6ad94-varrun\") pod \"csi-node-driver-sjhqz\" (UID: \"d3317169-9c1a-4d06-be39-f899c6b6ad94\") " pod="calico-system/csi-node-driver-sjhqz" May 13 12:37:56.078128 kubelet[2762]: E0513 12:37:56.078114 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.078128 kubelet[2762]: W0513 12:37:56.078125 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.078177 kubelet[2762]: E0513 12:37:56.078151 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.078271 kubelet[2762]: E0513 12:37:56.078259 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.078271 kubelet[2762]: W0513 12:37:56.078270 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.078315 kubelet[2762]: E0513 12:37:56.078291 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.078404 kubelet[2762]: E0513 12:37:56.078393 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.078404 kubelet[2762]: W0513 12:37:56.078403 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.078442 kubelet[2762]: E0513 12:37:56.078415 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.078547 kubelet[2762]: E0513 12:37:56.078538 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.078568 kubelet[2762]: W0513 12:37:56.078547 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.078568 kubelet[2762]: E0513 12:37:56.078558 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.078615 kubelet[2762]: I0513 12:37:56.078577 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrmc\" (UniqueName: \"kubernetes.io/projected/d3317169-9c1a-4d06-be39-f899c6b6ad94-kube-api-access-hzrmc\") pod \"csi-node-driver-sjhqz\" (UID: \"d3317169-9c1a-4d06-be39-f899c6b6ad94\") " pod="calico-system/csi-node-driver-sjhqz" May 13 12:37:56.078708 kubelet[2762]: E0513 12:37:56.078696 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.078732 kubelet[2762]: W0513 12:37:56.078709 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.078732 kubelet[2762]: E0513 12:37:56.078722 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.078846 kubelet[2762]: E0513 12:37:56.078837 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.078868 kubelet[2762]: W0513 12:37:56.078846 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.078868 kubelet[2762]: E0513 12:37:56.078853 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.079028 kubelet[2762]: E0513 12:37:56.079016 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.079028 kubelet[2762]: W0513 12:37:56.079026 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.079074 kubelet[2762]: E0513 12:37:56.079038 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.079172 kubelet[2762]: E0513 12:37:56.079161 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.079172 kubelet[2762]: W0513 12:37:56.079170 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.079223 kubelet[2762]: E0513 12:37:56.079177 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.079378 kubelet[2762]: E0513 12:37:56.079361 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.079405 kubelet[2762]: W0513 12:37:56.079378 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.079405 kubelet[2762]: E0513 12:37:56.079390 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.079587 kubelet[2762]: E0513 12:37:56.079570 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.079622 kubelet[2762]: W0513 12:37:56.079586 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.079622 kubelet[2762]: E0513 12:37:56.079597 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.079779 kubelet[2762]: E0513 12:37:56.079767 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.079779 kubelet[2762]: W0513 12:37:56.079779 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.079829 kubelet[2762]: E0513 12:37:56.079788 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.127905 containerd[1515]: time="2025-05-13T12:37:56.127857271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d9bc59ccf-2cm5r,Uid:ee78474d-4c1d-4dc3-b21b-567a135db359,Namespace:calico-system,Attempt:0,}" May 13 12:37:56.160368 containerd[1515]: time="2025-05-13T12:37:56.160318127Z" level=info msg="connecting to shim a03c6a64bfa81aff3f358ec1243b5d8cc6e3a40de5bb4a6f1b336bdd2cf37ed0" address="unix:///run/containerd/s/0ee4cad439b3604ae450c8ae3695e552bb45db6654d440e01be5c33529ab16c3" namespace=k8s.io protocol=ttrpc version=3 May 13 12:37:56.162779 containerd[1515]: time="2025-05-13T12:37:56.162725640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hsdq,Uid:c44a3acd-72a7-481d-97bf-bd9ad94cfff7,Namespace:calico-system,Attempt:0,}" May 13 12:37:56.178609 containerd[1515]: time="2025-05-13T12:37:56.178572866Z" level=info msg="connecting to shim bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8" address="unix:///run/containerd/s/501bcc4a03278bf95c7a3b66ba1fe74e36fdb5e0de0b7fa5ecb4b76c4b336936" namespace=k8s.io protocol=ttrpc version=3 May 13 12:37:56.180559 kubelet[2762]: E0513 12:37:56.180511 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected 
end of JSON input May 13 12:37:56.180559 kubelet[2762]: W0513 12:37:56.180530 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.180812 kubelet[2762]: E0513 12:37:56.180679 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.180954 kubelet[2762]: E0513 12:37:56.180919 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.180954 kubelet[2762]: W0513 12:37:56.180931 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.181154 kubelet[2762]: E0513 12:37:56.181046 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.181262 kubelet[2762]: E0513 12:37:56.181249 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.181366 kubelet[2762]: W0513 12:37:56.181319 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.181585 kubelet[2762]: E0513 12:37:56.181522 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.182088 kubelet[2762]: E0513 12:37:56.181956 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.182088 kubelet[2762]: W0513 12:37:56.181975 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.182088 kubelet[2762]: E0513 12:37:56.181994 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.182058 systemd[1]: Started cri-containerd-a03c6a64bfa81aff3f358ec1243b5d8cc6e3a40de5bb4a6f1b336bdd2cf37ed0.scope - libcontainer container a03c6a64bfa81aff3f358ec1243b5d8cc6e3a40de5bb4a6f1b336bdd2cf37ed0. May 13 12:37:56.184836 kubelet[2762]: E0513 12:37:56.184810 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.184836 kubelet[2762]: W0513 12:37:56.184830 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.184933 kubelet[2762]: E0513 12:37:56.184849 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.185341 kubelet[2762]: E0513 12:37:56.185324 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.185442 kubelet[2762]: W0513 12:37:56.185357 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.185442 kubelet[2762]: E0513 12:37:56.185395 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.185541 kubelet[2762]: E0513 12:37:56.185528 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.185541 kubelet[2762]: W0513 12:37:56.185539 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.185616 kubelet[2762]: E0513 12:37:56.185594 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.185724 kubelet[2762]: E0513 12:37:56.185710 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.185802 kubelet[2762]: W0513 12:37:56.185724 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.185802 kubelet[2762]: E0513 12:37:56.185773 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.185931 kubelet[2762]: E0513 12:37:56.185918 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.185931 kubelet[2762]: W0513 12:37:56.185930 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.185999 kubelet[2762]: E0513 12:37:56.185960 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.186087 kubelet[2762]: E0513 12:37:56.186076 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.186087 kubelet[2762]: W0513 12:37:56.186086 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.186145 kubelet[2762]: E0513 12:37:56.186102 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.186370 kubelet[2762]: E0513 12:37:56.186348 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.186370 kubelet[2762]: W0513 12:37:56.186360 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.186423 kubelet[2762]: E0513 12:37:56.186377 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.186591 kubelet[2762]: E0513 12:37:56.186580 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.186591 kubelet[2762]: W0513 12:37:56.186590 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.186652 kubelet[2762]: E0513 12:37:56.186609 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.186875 kubelet[2762]: E0513 12:37:56.186860 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.186926 kubelet[2762]: W0513 12:37:56.186876 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.187094 kubelet[2762]: E0513 12:37:56.187063 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.187423 kubelet[2762]: E0513 12:37:56.187301 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.187423 kubelet[2762]: W0513 12:37:56.187317 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.187423 kubelet[2762]: E0513 12:37:56.187334 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.187606 kubelet[2762]: E0513 12:37:56.187594 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.187659 kubelet[2762]: W0513 12:37:56.187648 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.187758 kubelet[2762]: E0513 12:37:56.187727 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.188031 kubelet[2762]: E0513 12:37:56.187945 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.188031 kubelet[2762]: W0513 12:37:56.187957 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.188031 kubelet[2762]: E0513 12:37:56.187995 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.188312 kubelet[2762]: E0513 12:37:56.188253 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.188312 kubelet[2762]: W0513 12:37:56.188267 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.188312 kubelet[2762]: E0513 12:37:56.188283 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.188958 kubelet[2762]: E0513 12:37:56.188944 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.189142 kubelet[2762]: W0513 12:37:56.189004 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.189142 kubelet[2762]: E0513 12:37:56.189031 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.189324 kubelet[2762]: E0513 12:37:56.189298 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.189483 kubelet[2762]: W0513 12:37:56.189367 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.189483 kubelet[2762]: E0513 12:37:56.189389 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.189625 kubelet[2762]: E0513 12:37:56.189612 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.189675 kubelet[2762]: W0513 12:37:56.189665 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.189736 kubelet[2762]: E0513 12:37:56.189725 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.190000 kubelet[2762]: E0513 12:37:56.189970 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.190000 kubelet[2762]: W0513 12:37:56.189998 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.190096 kubelet[2762]: E0513 12:37:56.190051 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.190201 kubelet[2762]: E0513 12:37:56.190190 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.190201 kubelet[2762]: W0513 12:37:56.190201 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.190398 kubelet[2762]: E0513 12:37:56.190214 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.190478 kubelet[2762]: E0513 12:37:56.190433 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.190478 kubelet[2762]: W0513 12:37:56.190470 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.190478 kubelet[2762]: E0513 12:37:56.190490 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:37:56.190731 kubelet[2762]: E0513 12:37:56.190718 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.190912 kubelet[2762]: W0513 12:37:56.190771 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.190912 kubelet[2762]: E0513 12:37:56.190786 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:37:56.191067 kubelet[2762]: E0513 12:37:56.191053 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:37:56.191123 kubelet[2762]: W0513 12:37:56.191112 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:37:56.191309 kubelet[2762]: E0513 12:37:56.191165 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:37:56.201360 kubelet[2762]: E0513 12:37:56.201338 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:37:56.201360 kubelet[2762]: W0513 12:37:56.201355 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:37:56.201452 kubelet[2762]: E0513 12:37:56.201370 2762 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:37:56.206058 systemd[1]: Started cri-containerd-bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8.scope - libcontainer container bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8.
May 13 12:37:56.235465 containerd[1515]: time="2025-05-13T12:37:56.235415059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hsdq,Uid:c44a3acd-72a7-481d-97bf-bd9ad94cfff7,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8\""
May 13 12:37:56.237675 containerd[1515]: time="2025-05-13T12:37:56.237580052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 13 12:37:56.244505 containerd[1515]: time="2025-05-13T12:37:56.244467456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d9bc59ccf-2cm5r,Uid:ee78474d-4c1d-4dc3-b21b-567a135db359,Namespace:calico-system,Attempt:0,} returns sandbox id \"a03c6a64bfa81aff3f358ec1243b5d8cc6e3a40de5bb4a6f1b336bdd2cf37ed0\""
May 13 12:37:57.730942 kubelet[2762]: E0513 12:37:57.730858 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:37:58.870694 containerd[1515]: time="2025-05-13T12:37:58.870386449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:58.871940 containerd[1515]: time="2025-05-13T12:37:58.871869941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903"
May 13 12:37:58.872747 containerd[1515]: time="2025-05-13T12:37:58.872707381Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:58.874847 containerd[1515]: time="2025-05-13T12:37:58.874815724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:37:58.875826 containerd[1515]: time="2025-05-13T12:37:58.875774541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 2.638096193s"
May 13 12:37:58.875826 containerd[1515]: time="2025-05-13T12:37:58.875809946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\""
May 13 12:37:58.877410 containerd[1515]: time="2025-05-13T12:37:58.877199025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 13 12:37:58.878236 containerd[1515]: time="2025-05-13T12:37:58.878206890Z" level=info msg="CreateContainer within sandbox \"bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 13 12:37:58.887233 containerd[1515]: time="2025-05-13T12:37:58.886146348Z" level=info msg="Container d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e: CDI devices from CRI Config.CDIDevices: []"
May 13 12:37:58.888666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount982271827.mount: Deactivated successfully.
May 13 12:37:58.892635 containerd[1515]: time="2025-05-13T12:37:58.892591193Z" level=info msg="CreateContainer within sandbox \"bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e\""
May 13 12:37:58.893056 containerd[1515]: time="2025-05-13T12:37:58.893024895Z" level=info msg="StartContainer for \"d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e\""
May 13 12:37:58.894569 containerd[1515]: time="2025-05-13T12:37:58.894542752Z" level=info msg="connecting to shim d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e" address="unix:///run/containerd/s/501bcc4a03278bf95c7a3b66ba1fe74e36fdb5e0de0b7fa5ecb4b76c4b336936" protocol=ttrpc version=3
May 13 12:37:58.914053 systemd[1]: Started cri-containerd-d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e.scope - libcontainer container d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e.
May 13 12:37:58.988852 containerd[1515]: time="2025-05-13T12:37:58.987445514Z" level=info msg="StartContainer for \"d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e\" returns successfully"
May 13 12:37:59.008110 systemd[1]: cri-containerd-d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e.scope: Deactivated successfully.
May 13 12:37:59.008967 systemd[1]: cri-containerd-d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e.scope: Consumed 49ms CPU time, 8.1M memory peak, 6.2M written to disk.
May 13 12:37:59.031905 containerd[1515]: time="2025-05-13T12:37:59.031847684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e\" id:\"d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e\" pid:3354 exited_at:{seconds:1747139879 nanos:22549874}"
May 13 12:37:59.032699 containerd[1515]: time="2025-05-13T12:37:59.032649871Z" level=info msg="received exit event container_id:\"d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e\" id:\"d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e\" pid:3354 exited_at:{seconds:1747139879 nanos:22549874}"
May 13 12:37:59.060831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01378b50da20a0df1ad864c39de1b5ad24d0b0404586a444bdaea71621e3b4e-rootfs.mount: Deactivated successfully.
May 13 12:37:59.730439 kubelet[2762]: E0513 12:37:59.730364 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:38:01.731082 kubelet[2762]: E0513 12:38:01.731011 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:38:03.020176 systemd[1]: Started sshd@7-10.0.0.46:22-10.0.0.1:46096.service - OpenSSH per-connection server daemon (10.0.0.1:46096).
May 13 12:38:03.078503 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 46096 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:38:03.080475 sshd-session[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:38:03.084402 systemd-logind[1485]: New session 8 of user core.
May 13 12:38:03.094054 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 12:38:03.212029 sshd[3397]: Connection closed by 10.0.0.1 port 46096
May 13 12:38:03.212324 sshd-session[3395]: pam_unix(sshd:session): session closed for user core
May 13 12:38:03.215876 systemd[1]: sshd@7-10.0.0.46:22-10.0.0.1:46096.service: Deactivated successfully.
May 13 12:38:03.217546 systemd[1]: session-8.scope: Deactivated successfully.
May 13 12:38:03.218519 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
May 13 12:38:03.219849 systemd-logind[1485]: Removed session 8.
May 13 12:38:03.730772 kubelet[2762]: E0513 12:38:03.730701 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:38:05.638446 containerd[1515]: time="2025-05-13T12:38:05.638383718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:38:05.638971 containerd[1515]: time="2025-05-13T12:38:05.638935008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571"
May 13 12:38:05.639808 containerd[1515]: time="2025-05-13T12:38:05.639777805Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:38:05.641383 containerd[1515]: time="2025-05-13T12:38:05.641346028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:38:05.642069 containerd[1515]: time="2025-05-13T12:38:05.641995207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 6.764743775s"
May 13 12:38:05.642069 containerd[1515]: time="2025-05-13T12:38:05.642024810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\""
May 13 12:38:05.643126 containerd[1515]: time="2025-05-13T12:38:05.642994899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
May 13 12:38:05.661404 containerd[1515]: time="2025-05-13T12:38:05.661359735Z" level=info msg="CreateContainer within sandbox \"a03c6a64bfa81aff3f358ec1243b5d8cc6e3a40de5bb4a6f1b336bdd2cf37ed0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 13 12:38:05.669428 containerd[1515]: time="2025-05-13T12:38:05.669321102Z" level=info msg="Container 838d68c7cfd745cc0a8bf780e3691362e0a73cfd87d53e9c9c66da8c27380d86: CDI devices from CRI Config.CDIDevices: []"
May 13 12:38:05.670424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660416170.mount: Deactivated successfully.
May 13 12:38:05.678685 containerd[1515]: time="2025-05-13T12:38:05.678630551Z" level=info msg="CreateContainer within sandbox \"a03c6a64bfa81aff3f358ec1243b5d8cc6e3a40de5bb4a6f1b336bdd2cf37ed0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"838d68c7cfd745cc0a8bf780e3691362e0a73cfd87d53e9c9c66da8c27380d86\""
May 13 12:38:05.679777 containerd[1515]: time="2025-05-13T12:38:05.679741053Z" level=info msg="StartContainer for \"838d68c7cfd745cc0a8bf780e3691362e0a73cfd87d53e9c9c66da8c27380d86\""
May 13 12:38:05.680936 containerd[1515]: time="2025-05-13T12:38:05.680868996Z" level=info msg="connecting to shim 838d68c7cfd745cc0a8bf780e3691362e0a73cfd87d53e9c9c66da8c27380d86" address="unix:///run/containerd/s/0ee4cad439b3604ae450c8ae3695e552bb45db6654d440e01be5c33529ab16c3" protocol=ttrpc version=3
May 13 12:38:05.697068 systemd[1]: Started cri-containerd-838d68c7cfd745cc0a8bf780e3691362e0a73cfd87d53e9c9c66da8c27380d86.scope - libcontainer container 838d68c7cfd745cc0a8bf780e3691362e0a73cfd87d53e9c9c66da8c27380d86.
May 13 12:38:05.731373 kubelet[2762]: E0513 12:38:05.731321 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:38:05.736121 containerd[1515]: time="2025-05-13T12:38:05.736084315Z" level=info msg="StartContainer for \"838d68c7cfd745cc0a8bf780e3691362e0a73cfd87d53e9c9c66da8c27380d86\" returns successfully"
May 13 12:38:05.823549 kubelet[2762]: I0513 12:38:05.823233 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d9bc59ccf-2cm5r" podStartSLOduration=1.425912255 podStartE2EDuration="10.823218109s" podCreationTimestamp="2025-05-13 12:37:55 +0000 UTC" firstStartedPulling="2025-05-13 12:37:56.245538951 +0000 UTC m=+23.591369357" lastFinishedPulling="2025-05-13 12:38:05.642844725 +0000 UTC m=+32.988675211" observedRunningTime="2025-05-13 12:38:05.822899679 +0000 UTC m=+33.168730125" watchObservedRunningTime="2025-05-13 12:38:05.823218109 +0000 UTC m=+33.169048555"
May 13 12:38:07.730686 kubelet[2762]: E0513 12:38:07.730635 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:38:08.227990 systemd[1]: Started sshd@8-10.0.0.46:22-10.0.0.1:46112.service - OpenSSH per-connection server daemon (10.0.0.1:46112).
May 13 12:38:08.286316 sshd[3469]: Accepted publickey for core from 10.0.0.1 port 46112 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:38:08.287423 sshd-session[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:38:08.291388 systemd-logind[1485]: New session 9 of user core.
May 13 12:38:08.301007 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 12:38:08.411001 sshd[3471]: Connection closed by 10.0.0.1 port 46112
May 13 12:38:08.411314 sshd-session[3469]: pam_unix(sshd:session): session closed for user core
May 13 12:38:08.414986 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
May 13 12:38:08.415235 systemd[1]: sshd@8-10.0.0.46:22-10.0.0.1:46112.service: Deactivated successfully.
May 13 12:38:08.416709 systemd[1]: session-9.scope: Deactivated successfully.
May 13 12:38:08.419165 systemd-logind[1485]: Removed session 9.
May 13 12:38:09.731333 kubelet[2762]: E0513 12:38:09.731288 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:38:11.106868 containerd[1515]: time="2025-05-13T12:38:11.106823805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:38:11.107340 containerd[1515]: time="2025-05-13T12:38:11.107309996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270"
May 13 12:38:11.107961 containerd[1515]: time="2025-05-13T12:38:11.107934397Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:38:11.111536 containerd[1515]: time="2025-05-13T12:38:11.111470304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:38:11.112235 containerd[1515]: time="2025-05-13T12:38:11.112199870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 5.469171649s"
May 13 12:38:11.112235 containerd[1515]: time="2025-05-13T12:38:11.112232473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\""
May 13 12:38:11.117917 containerd[1515]: time="2025-05-13T12:38:11.117599497Z" level=info msg="CreateContainer within sandbox \"bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 13 12:38:11.124614 containerd[1515]: time="2025-05-13T12:38:11.123747132Z" level=info msg="Container 5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998: CDI devices from CRI Config.CDIDevices: []"
May 13 12:38:11.131063 containerd[1515]: time="2025-05-13T12:38:11.131003758Z" level=info msg="CreateContainer within sandbox \"bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998\""
May 13 12:38:11.131581 containerd[1515]: time="2025-05-13T12:38:11.131552273Z" level=info msg="StartContainer for \"5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998\""
May 13 12:38:11.133183 containerd[1515]: time="2025-05-13T12:38:11.133142175Z" level=info msg="connecting to shim 5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998" address="unix:///run/containerd/s/501bcc4a03278bf95c7a3b66ba1fe74e36fdb5e0de0b7fa5ecb4b76c4b336936" protocol=ttrpc version=3
May 13 12:38:11.154053 systemd[1]: Started cri-containerd-5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998.scope - libcontainer container 5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998.
May 13 12:38:11.187276 containerd[1515]: time="2025-05-13T12:38:11.187219288Z" level=info msg="StartContainer for \"5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998\" returns successfully"
May 13 12:38:11.674171 systemd[1]: cri-containerd-5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998.scope: Deactivated successfully.
May 13 12:38:11.675077 systemd[1]: cri-containerd-5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998.scope: Consumed 439ms CPU time, 156.7M memory peak, 4K read from disk, 150.3M written to disk.
May 13 12:38:11.684207 containerd[1515]: time="2025-05-13T12:38:11.684163363Z" level=info msg="received exit event container_id:\"5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998\" id:\"5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998\" pid:3505 exited_at:{seconds:1747139891 nanos:683941068}"
May 13 12:38:11.684556 containerd[1515]: time="2025-05-13T12:38:11.684370096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998\" id:\"5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998\" pid:3505 exited_at:{seconds:1747139891 nanos:683941068}"
May 13 12:38:11.698560 kubelet[2762]: I0513 12:38:11.698514 2762 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 13 12:38:11.704936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a654108181300bcd960289a9bf2fcf42e279ef5d46f138d6ef1a13cfcabd998-rootfs.mount: Deactivated successfully.
May 13 12:38:11.746824 kubelet[2762]: I0513 12:38:11.746255 2762 topology_manager.go:215] "Topology Admit Handler" podUID="598e01bb-31c9-4a2f-89fd-2acd7cde631f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q8qxk"
May 13 12:38:11.747839 kubelet[2762]: I0513 12:38:11.747787 2762 topology_manager.go:215] "Topology Admit Handler" podUID="3c236d1b-5eee-40d9-9284-ae0c63951484" podNamespace="calico-system" podName="calico-kube-controllers-6b78c5754f-xv599"
May 13 12:38:11.749960 kubelet[2762]: I0513 12:38:11.749316 2762 topology_manager.go:215] "Topology Admit Handler" podUID="1b99018c-e9e6-4d14-a4dc-d064c4bafd96" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kxg5d"
May 13 12:38:11.753404 systemd[1]: Created slice kubepods-besteffort-podd3317169_9c1a_4d06_be39_f899c6b6ad94.slice - libcontainer container kubepods-besteffort-podd3317169_9c1a_4d06_be39_f899c6b6ad94.slice.
May 13 12:38:11.767953 systemd[1]: Created slice kubepods-burstable-pod598e01bb_31c9_4a2f_89fd_2acd7cde631f.slice - libcontainer container kubepods-burstable-pod598e01bb_31c9_4a2f_89fd_2acd7cde631f.slice.
May 13 12:38:11.775203 containerd[1515]: time="2025-05-13T12:38:11.775021518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhqz,Uid:d3317169-9c1a-4d06-be39-f899c6b6ad94,Namespace:calico-system,Attempt:0,}"
May 13 12:38:11.776711 systemd[1]: Created slice kubepods-besteffort-pod3c236d1b_5eee_40d9_9284_ae0c63951484.slice - libcontainer container kubepods-besteffort-pod3c236d1b_5eee_40d9_9284_ae0c63951484.slice.
May 13 12:38:11.781529 kubelet[2762]: I0513 12:38:11.781485 2762 topology_manager.go:215] "Topology Admit Handler" podUID="33d6eab6-5559-4c8d-ab4d-98247a898dc8" podNamespace="calico-apiserver" podName="calico-apiserver-59c5b9dc8b-hmthn"
May 13 12:38:11.781668 kubelet[2762]: I0513 12:38:11.781651 2762 topology_manager.go:215] "Topology Admit Handler" podUID="77933221-0b99-4630-8fad-a6aac7797690" podNamespace="calico-apiserver" podName="calico-apiserver-59c5b9dc8b-hp2wg"
May 13 12:38:11.794689 systemd[1]: Created slice kubepods-burstable-pod1b99018c_e9e6_4d14_a4dc_d064c4bafd96.slice - libcontainer container kubepods-burstable-pod1b99018c_e9e6_4d14_a4dc_d064c4bafd96.slice.
May 13 12:38:11.796717 kubelet[2762]: I0513 12:38:11.796691 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c236d1b-5eee-40d9-9284-ae0c63951484-tigera-ca-bundle\") pod \"calico-kube-controllers-6b78c5754f-xv599\" (UID: \"3c236d1b-5eee-40d9-9284-ae0c63951484\") " pod="calico-system/calico-kube-controllers-6b78c5754f-xv599"
May 13 12:38:11.796975 kubelet[2762]: I0513 12:38:11.796954 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7fn\" (UniqueName: \"kubernetes.io/projected/3c236d1b-5eee-40d9-9284-ae0c63951484-kube-api-access-sx7fn\") pod \"calico-kube-controllers-6b78c5754f-xv599\" (UID: \"3c236d1b-5eee-40d9-9284-ae0c63951484\") " pod="calico-system/calico-kube-controllers-6b78c5754f-xv599"
May 13 12:38:11.797212 kubelet[2762]: I0513 12:38:11.797102 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598e01bb-31c9-4a2f-89fd-2acd7cde631f-config-volume\") pod \"coredns-7db6d8ff4d-q8qxk\" (UID: \"598e01bb-31c9-4a2f-89fd-2acd7cde631f\") " pod="kube-system/coredns-7db6d8ff4d-q8qxk"
May 13 12:38:11.797212 kubelet[2762]: I0513 12:38:11.797127 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49p7d\" (UniqueName: \"kubernetes.io/projected/598e01bb-31c9-4a2f-89fd-2acd7cde631f-kube-api-access-49p7d\") pod \"coredns-7db6d8ff4d-q8qxk\" (UID: \"598e01bb-31c9-4a2f-89fd-2acd7cde631f\") " pod="kube-system/coredns-7db6d8ff4d-q8qxk"
May 13 12:38:11.801325 systemd[1]: Created slice kubepods-besteffort-pod33d6eab6_5559_4c8d_ab4d_98247a898dc8.slice - libcontainer container kubepods-besteffort-pod33d6eab6_5559_4c8d_ab4d_98247a898dc8.slice.
May 13 12:38:11.820208 systemd[1]: Created slice kubepods-besteffort-pod77933221_0b99_4630_8fad_a6aac7797690.slice - libcontainer container kubepods-besteffort-pod77933221_0b99_4630_8fad_a6aac7797690.slice.
May 13 12:38:11.898442 kubelet[2762]: I0513 12:38:11.898395 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b99018c-e9e6-4d14-a4dc-d064c4bafd96-config-volume\") pod \"coredns-7db6d8ff4d-kxg5d\" (UID: \"1b99018c-e9e6-4d14-a4dc-d064c4bafd96\") " pod="kube-system/coredns-7db6d8ff4d-kxg5d"
May 13 12:38:11.898581 kubelet[2762]: I0513 12:38:11.898450 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtk9d\" (UniqueName: \"kubernetes.io/projected/1b99018c-e9e6-4d14-a4dc-d064c4bafd96-kube-api-access-gtk9d\") pod \"coredns-7db6d8ff4d-kxg5d\" (UID: \"1b99018c-e9e6-4d14-a4dc-d064c4bafd96\") " pod="kube-system/coredns-7db6d8ff4d-kxg5d"
May 13 12:38:11.898581 kubelet[2762]: I0513 12:38:11.898474 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/33d6eab6-5559-4c8d-ab4d-98247a898dc8-calico-apiserver-certs\") pod \"calico-apiserver-59c5b9dc8b-hmthn\" (UID: \"33d6eab6-5559-4c8d-ab4d-98247a898dc8\") " pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hmthn"
May 13 12:38:11.898581 kubelet[2762]: I0513 12:38:11.898491 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9vc\" (UniqueName: \"kubernetes.io/projected/33d6eab6-5559-4c8d-ab4d-98247a898dc8-kube-api-access-kq9vc\") pod \"calico-apiserver-59c5b9dc8b-hmthn\" (UID: \"33d6eab6-5559-4c8d-ab4d-98247a898dc8\") " pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hmthn"
May 13 12:38:11.898944 kubelet[2762]: I0513 12:38:11.898602 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4n6f\" (UniqueName: \"kubernetes.io/projected/77933221-0b99-4630-8fad-a6aac7797690-kube-api-access-p4n6f\") pod \"calico-apiserver-59c5b9dc8b-hp2wg\" (UID: \"77933221-0b99-4630-8fad-a6aac7797690\") " pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hp2wg"
May 13 12:38:11.898944 kubelet[2762]: I0513 12:38:11.898625 2762 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77933221-0b99-4630-8fad-a6aac7797690-calico-apiserver-certs\") pod \"calico-apiserver-59c5b9dc8b-hp2wg\" (UID: \"77933221-0b99-4630-8fad-a6aac7797690\") " pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hp2wg"
May 13 12:38:12.062693 containerd[1515]: time="2025-05-13T12:38:12.062575796Z" level=error msg="Failed to destroy network for sandbox \"e3b756d41f0a2514c8a7eb922fa10dc599849369123b8b04c541a356ac8141c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.064368 containerd[1515]: time="2025-05-13T12:38:12.064294824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhqz,Uid:d3317169-9c1a-4d06-be39-f899c6b6ad94,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3b756d41f0a2514c8a7eb922fa10dc599849369123b8b04c541a356ac8141c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.068863 kubelet[2762]: E0513 12:38:12.068734 2762 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3b756d41f0a2514c8a7eb922fa10dc599849369123b8b04c541a356ac8141c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.068863 kubelet[2762]: E0513 12:38:12.068824 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3b756d41f0a2514c8a7eb922fa10dc599849369123b8b04c541a356ac8141c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhqz"
May 13 12:38:12.068863 kubelet[2762]: E0513 12:38:12.068844 2762 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3b756d41f0a2514c8a7eb922fa10dc599849369123b8b04c541a356ac8141c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjhqz"
May 13 12:38:12.069019 kubelet[2762]: E0513 12:38:12.068910 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sjhqz_calico-system(d3317169-9c1a-4d06-be39-f899c6b6ad94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sjhqz_calico-system(d3317169-9c1a-4d06-be39-f899c6b6ad94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3b756d41f0a2514c8a7eb922fa10dc599849369123b8b04c541a356ac8141c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sjhqz" podUID="d3317169-9c1a-4d06-be39-f899c6b6ad94"
May 13 12:38:12.076853 containerd[1515]: time="2025-05-13T12:38:12.076811886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8qxk,Uid:598e01bb-31c9-4a2f-89fd-2acd7cde631f,Namespace:kube-system,Attempt:0,}"
May 13 12:38:12.079588 containerd[1515]: time="2025-05-13T12:38:12.079473692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b78c5754f-xv599,Uid:3c236d1b-5eee-40d9-9284-ae0c63951484,Namespace:calico-system,Attempt:0,}"
May 13 12:38:12.098119 containerd[1515]: time="2025-05-13T12:38:12.098068893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kxg5d,Uid:1b99018c-e9e6-4d14-a4dc-d064c4bafd96,Namespace:kube-system,Attempt:0,}"
May 13 12:38:12.114624 containerd[1515]: time="2025-05-13T12:38:12.114521401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c5b9dc8b-hmthn,Uid:33d6eab6-5559-4c8d-ab4d-98247a898dc8,Namespace:calico-apiserver,Attempt:0,}"
May 13 12:38:12.123163 containerd[1515]: time="2025-05-13T12:38:12.122832360Z" level=error msg="Failed to destroy network for sandbox \"a1b6015572325d6835ec967547faf2e0fcb8d12f2fb7dc4f7e7003b4009903f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.126933 containerd[1515]: time="2025-05-13T12:38:12.126533191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8qxk,Uid:598e01bb-31c9-4a2f-89fd-2acd7cde631f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b6015572325d6835ec967547faf2e0fcb8d12f2fb7dc4f7e7003b4009903f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.127251 kubelet[2762]: E0513 12:38:12.127213 2762 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b6015572325d6835ec967547faf2e0fcb8d12f2fb7dc4f7e7003b4009903f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.127310 kubelet[2762]: E0513 12:38:12.127270 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b6015572325d6835ec967547faf2e0fcb8d12f2fb7dc4f7e7003b4009903f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8qxk"
May 13 12:38:12.127310 kubelet[2762]: E0513 12:38:12.127293 2762 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b6015572325d6835ec967547faf2e0fcb8d12f2fb7dc4f7e7003b4009903f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q8qxk"
May 13 12:38:12.127359 kubelet[2762]: E0513 12:38:12.127327 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q8qxk_kube-system(598e01bb-31c9-4a2f-89fd-2acd7cde631f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q8qxk_kube-system(598e01bb-31c9-4a2f-89fd-2acd7cde631f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1b6015572325d6835ec967547faf2e0fcb8d12f2fb7dc4f7e7003b4009903f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q8qxk" podUID="598e01bb-31c9-4a2f-89fd-2acd7cde631f"
May 13 12:38:12.132262 containerd[1515]: time="2025-05-13T12:38:12.132224827Z" level=error msg="Failed to destroy network for sandbox \"0dd4c07f1a0c80d87802358fc84d29979507bb960078ac81ab7fd58bf330a9df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.132792 containerd[1515]: time="2025-05-13T12:38:12.132757420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c5b9dc8b-hp2wg,Uid:77933221-0b99-4630-8fad-a6aac7797690,Namespace:calico-apiserver,Attempt:0,}"
May 13 12:38:12.133767 containerd[1515]: time="2025-05-13T12:38:12.133734681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b78c5754f-xv599,Uid:3c236d1b-5eee-40d9-9284-ae0c63951484,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dd4c07f1a0c80d87802358fc84d29979507bb960078ac81ab7fd58bf330a9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.134395 systemd[1]: run-netns-cni\x2d61f180d5\x2d2987\x2d03d0\x2d91e7\x2d2a32a51f4656.mount: Deactivated successfully.
May 13 12:38:12.137801 systemd[1]: run-netns-cni\x2d548200a9\x2d6172\x2db77f\x2dfeac\x2dad55f284b3fc.mount: Deactivated successfully.
May 13 12:38:12.137965 kubelet[2762]: E0513 12:38:12.135013 2762 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dd4c07f1a0c80d87802358fc84d29979507bb960078ac81ab7fd58bf330a9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.138024 kubelet[2762]: E0513 12:38:12.137998 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dd4c07f1a0c80d87802358fc84d29979507bb960078ac81ab7fd58bf330a9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b78c5754f-xv599"
May 13 12:38:12.138050 kubelet[2762]: E0513 12:38:12.138035 2762 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dd4c07f1a0c80d87802358fc84d29979507bb960078ac81ab7fd58bf330a9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b78c5754f-xv599"
May 13 12:38:12.138212 kubelet[2762]: E0513 12:38:12.138165 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b78c5754f-xv599_calico-system(3c236d1b-5eee-40d9-9284-ae0c63951484)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b78c5754f-xv599_calico-system(3c236d1b-5eee-40d9-9284-ae0c63951484)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dd4c07f1a0c80d87802358fc84d29979507bb960078ac81ab7fd58bf330a9df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b78c5754f-xv599" podUID="3c236d1b-5eee-40d9-9284-ae0c63951484"
May 13 12:38:12.163491 containerd[1515]: time="2025-05-13T12:38:12.163438817Z" level=error msg="Failed to destroy network for sandbox \"3e1fc14394ef76f374bdfcacadeb10b23f142a65d46018f06c663e1dcad9d471\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.165452 containerd[1515]: time="2025-05-13T12:38:12.165410780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kxg5d,Uid:1b99018c-e9e6-4d14-a4dc-d064c4bafd96,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1fc14394ef76f374bdfcacadeb10b23f142a65d46018f06c663e1dcad9d471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.165703 kubelet[2762]: E0513 12:38:12.165666 2762 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1fc14394ef76f374bdfcacadeb10b23f142a65d46018f06c663e1dcad9d471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 12:38:12.165765 kubelet[2762]: E0513 12:38:12.165726 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"3e1fc14394ef76f374bdfcacadeb10b23f142a65d46018f06c663e1dcad9d471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kxg5d" May 13 12:38:12.165765 kubelet[2762]: E0513 12:38:12.165748 2762 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1fc14394ef76f374bdfcacadeb10b23f142a65d46018f06c663e1dcad9d471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kxg5d" May 13 12:38:12.165839 kubelet[2762]: E0513 12:38:12.165809 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kxg5d_kube-system(1b99018c-e9e6-4d14-a4dc-d064c4bafd96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kxg5d_kube-system(1b99018c-e9e6-4d14-a4dc-d064c4bafd96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e1fc14394ef76f374bdfcacadeb10b23f142a65d46018f06c663e1dcad9d471\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kxg5d" podUID="1b99018c-e9e6-4d14-a4dc-d064c4bafd96" May 13 12:38:12.166093 systemd[1]: run-netns-cni\x2d6a379ed3\x2d3f08\x2d912b\x2d3255\x2defd1e92f6734.mount: Deactivated successfully. 
May 13 12:38:12.186120 containerd[1515]: time="2025-05-13T12:38:12.186061550Z" level=error msg="Failed to destroy network for sandbox \"ad761233a3a5ab4631b15a3556ec579b4b505b56c05ae43deaa1ebfab3818d61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:38:12.188332 systemd[1]: run-netns-cni\x2d1f2af78f\x2dec77\x2ddd11\x2d3d65\x2d4fe9fc29f1b5.mount: Deactivated successfully. May 13 12:38:12.188758 containerd[1515]: time="2025-05-13T12:38:12.188712795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c5b9dc8b-hmthn,Uid:33d6eab6-5559-4c8d-ab4d-98247a898dc8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad761233a3a5ab4631b15a3556ec579b4b505b56c05ae43deaa1ebfab3818d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:38:12.189221 kubelet[2762]: E0513 12:38:12.188961 2762 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad761233a3a5ab4631b15a3556ec579b4b505b56c05ae43deaa1ebfab3818d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:38:12.189221 kubelet[2762]: E0513 12:38:12.189038 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad761233a3a5ab4631b15a3556ec579b4b505b56c05ae43deaa1ebfab3818d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hmthn" May 13 12:38:12.189221 kubelet[2762]: E0513 12:38:12.189060 2762 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad761233a3a5ab4631b15a3556ec579b4b505b56c05ae43deaa1ebfab3818d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hmthn" May 13 12:38:12.190008 kubelet[2762]: E0513 12:38:12.189098 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59c5b9dc8b-hmthn_calico-apiserver(33d6eab6-5559-4c8d-ab4d-98247a898dc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59c5b9dc8b-hmthn_calico-apiserver(33d6eab6-5559-4c8d-ab4d-98247a898dc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad761233a3a5ab4631b15a3556ec579b4b505b56c05ae43deaa1ebfab3818d61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hmthn" podUID="33d6eab6-5559-4c8d-ab4d-98247a898dc8" May 13 12:38:12.191995 containerd[1515]: time="2025-05-13T12:38:12.191958558Z" level=error msg="Failed to destroy network for sandbox \"f191b940c22e558ce89826190c7569ff671c466b55c066d813b2ec99230ae85c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:38:12.193068 containerd[1515]: time="2025-05-13T12:38:12.192962461Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59c5b9dc8b-hp2wg,Uid:77933221-0b99-4630-8fad-a6aac7797690,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f191b940c22e558ce89826190c7569ff671c466b55c066d813b2ec99230ae85c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:38:12.193203 kubelet[2762]: E0513 12:38:12.193156 2762 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f191b940c22e558ce89826190c7569ff671c466b55c066d813b2ec99230ae85c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:38:12.193248 kubelet[2762]: E0513 12:38:12.193217 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f191b940c22e558ce89826190c7569ff671c466b55c066d813b2ec99230ae85c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hp2wg" May 13 12:38:12.193248 kubelet[2762]: E0513 12:38:12.193240 2762 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f191b940c22e558ce89826190c7569ff671c466b55c066d813b2ec99230ae85c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hp2wg" May 13 12:38:12.193316 kubelet[2762]: E0513 
12:38:12.193289 2762 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59c5b9dc8b-hp2wg_calico-apiserver(77933221-0b99-4630-8fad-a6aac7797690)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59c5b9dc8b-hp2wg_calico-apiserver(77933221-0b99-4630-8fad-a6aac7797690)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f191b940c22e558ce89826190c7569ff671c466b55c066d813b2ec99230ae85c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hp2wg" podUID="77933221-0b99-4630-8fad-a6aac7797690" May 13 12:38:12.838343 containerd[1515]: time="2025-05-13T12:38:12.838290010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 12:38:13.125444 systemd[1]: run-netns-cni\x2d1e5653a7\x2d8258\x2dce4e\x2ded33\x2dea7a6e2f4385.mount: Deactivated successfully. May 13 12:38:13.424539 systemd[1]: Started sshd@9-10.0.0.46:22-10.0.0.1:45610.service - OpenSSH per-connection server daemon (10.0.0.1:45610). May 13 12:38:13.485396 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 45610 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:13.486751 sshd-session[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:13.490544 systemd-logind[1485]: New session 10 of user core. May 13 12:38:13.497025 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 12:38:13.606851 sshd[3771]: Connection closed by 10.0.0.1 port 45610 May 13 12:38:13.607228 sshd-session[3769]: pam_unix(sshd:session): session closed for user core May 13 12:38:13.610666 systemd[1]: sshd@9-10.0.0.46:22-10.0.0.1:45610.service: Deactivated successfully. 
May 13 12:38:13.612232 systemd[1]: session-10.scope: Deactivated successfully. May 13 12:38:13.614237 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. May 13 12:38:13.615402 systemd-logind[1485]: Removed session 10. May 13 12:38:18.622213 systemd[1]: Started sshd@10-10.0.0.46:22-10.0.0.1:45616.service - OpenSSH per-connection server daemon (10.0.0.1:45616). May 13 12:38:18.682512 sshd[3792]: Accepted publickey for core from 10.0.0.1 port 45616 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:18.684216 sshd-session[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:18.689997 systemd-logind[1485]: New session 11 of user core. May 13 12:38:18.705097 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 12:38:18.821095 sshd[3794]: Connection closed by 10.0.0.1 port 45616 May 13 12:38:18.821352 sshd-session[3792]: pam_unix(sshd:session): session closed for user core May 13 12:38:18.837359 systemd[1]: sshd@10-10.0.0.46:22-10.0.0.1:45616.service: Deactivated successfully. May 13 12:38:18.839367 systemd[1]: session-11.scope: Deactivated successfully. May 13 12:38:18.840456 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. May 13 12:38:18.843812 systemd[1]: Started sshd@11-10.0.0.46:22-10.0.0.1:45620.service - OpenSSH per-connection server daemon (10.0.0.1:45620). May 13 12:38:18.844744 systemd-logind[1485]: Removed session 11. May 13 12:38:18.900523 sshd[3809]: Accepted publickey for core from 10.0.0.1 port 45620 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:18.901784 sshd-session[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:18.906460 systemd-logind[1485]: New session 12 of user core. May 13 12:38:18.916093 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 13 12:38:19.072790 sshd[3811]: Connection closed by 10.0.0.1 port 45620 May 13 12:38:19.073468 sshd-session[3809]: pam_unix(sshd:session): session closed for user core May 13 12:38:19.086362 systemd[1]: sshd@11-10.0.0.46:22-10.0.0.1:45620.service: Deactivated successfully. May 13 12:38:19.091270 systemd[1]: session-12.scope: Deactivated successfully. May 13 12:38:19.092739 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. May 13 12:38:19.097239 systemd[1]: Started sshd@12-10.0.0.46:22-10.0.0.1:45622.service - OpenSSH per-connection server daemon (10.0.0.1:45622). May 13 12:38:19.098106 systemd-logind[1485]: Removed session 12. May 13 12:38:19.155528 sshd[3824]: Accepted publickey for core from 10.0.0.1 port 45622 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:19.157010 sshd-session[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:19.161696 systemd-logind[1485]: New session 13 of user core. May 13 12:38:19.169019 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 12:38:19.297935 sshd[3826]: Connection closed by 10.0.0.1 port 45622 May 13 12:38:19.298642 sshd-session[3824]: pam_unix(sshd:session): session closed for user core May 13 12:38:19.302758 systemd[1]: sshd@12-10.0.0.46:22-10.0.0.1:45622.service: Deactivated successfully. May 13 12:38:19.304918 systemd[1]: session-13.scope: Deactivated successfully. May 13 12:38:19.306208 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. May 13 12:38:19.307388 systemd-logind[1485]: Removed session 13. May 13 12:38:20.161156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372545953.mount: Deactivated successfully. 
May 13 12:38:20.257648 containerd[1515]: time="2025-05-13T12:38:20.257589116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 12:38:20.280961 containerd[1515]: time="2025-05-13T12:38:20.280911368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:20.283502 containerd[1515]: time="2025-05-13T12:38:20.283452455Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:20.294680 containerd[1515]: time="2025-05-13T12:38:20.294622736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:20.295195 containerd[1515]: time="2025-05-13T12:38:20.295155203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 7.456827751s" May 13 12:38:20.295195 containerd[1515]: time="2025-05-13T12:38:20.295184845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 12:38:20.307342 containerd[1515]: time="2025-05-13T12:38:20.307294933Z" level=info msg="CreateContainer within sandbox \"bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 12:38:20.316773 containerd[1515]: time="2025-05-13T12:38:20.316600201Z" level=info msg="Container 
95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c: CDI devices from CRI Config.CDIDevices: []" May 13 12:38:20.325018 containerd[1515]: time="2025-05-13T12:38:20.324974461Z" level=info msg="CreateContainer within sandbox \"bd42eb710f59949028fca2588f050766215f73940d3134c8524eeaa0d51606d8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c\"" May 13 12:38:20.325758 containerd[1515]: time="2025-05-13T12:38:20.325726099Z" level=info msg="StartContainer for \"95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c\"" May 13 12:38:20.327662 containerd[1515]: time="2025-05-13T12:38:20.327623875Z" level=info msg="connecting to shim 95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c" address="unix:///run/containerd/s/501bcc4a03278bf95c7a3b66ba1fe74e36fdb5e0de0b7fa5ecb4b76c4b336936" protocol=ttrpc version=3 May 13 12:38:20.347142 systemd[1]: Started cri-containerd-95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c.scope - libcontainer container 95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c. May 13 12:38:20.378308 containerd[1515]: time="2025-05-13T12:38:20.378275860Z" level=info msg="StartContainer for \"95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c\" returns successfully" May 13 12:38:20.586626 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 12:38:20.586759 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 12:38:20.877552 kubelet[2762]: I0513 12:38:20.877482 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7hsdq" podStartSLOduration=1.81567552 podStartE2EDuration="25.874552875s" podCreationTimestamp="2025-05-13 12:37:55 +0000 UTC" firstStartedPulling="2025-05-13 12:37:56.237065328 +0000 UTC m=+23.582895774" lastFinishedPulling="2025-05-13 12:38:20.295942683 +0000 UTC m=+47.641773129" observedRunningTime="2025-05-13 12:38:20.873717433 +0000 UTC m=+48.219547879" watchObservedRunningTime="2025-05-13 12:38:20.874552875 +0000 UTC m=+48.220383321" May 13 12:38:20.939448 containerd[1515]: time="2025-05-13T12:38:20.939400413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c\" id:\"9889cbb042624ad255dfbd125811f9eb18a858cb2aa5c3e1733bd06681f35c33\" pid:3915 exit_status:1 exited_at:{seconds:1747139900 nanos:938997153}" May 13 12:38:21.972468 containerd[1515]: time="2025-05-13T12:38:21.972420634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c\" id:\"7b8998b4676df87085f7ce2e501a0e4475194185876f2bb515978bd9d668522c\" pid:4034 exit_status:1 exited_at:{seconds:1747139901 nanos:972115339}" May 13 12:38:22.144320 systemd-networkd[1429]: vxlan.calico: Link UP May 13 12:38:22.144326 systemd-networkd[1429]: vxlan.calico: Gained carrier May 13 12:38:22.731663 containerd[1515]: time="2025-05-13T12:38:22.731613522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c5b9dc8b-hp2wg,Uid:77933221-0b99-4630-8fad-a6aac7797690,Namespace:calico-apiserver,Attempt:0,}" May 13 12:38:23.015930 systemd-networkd[1429]: cali5bfd54f7579: Link UP May 13 12:38:23.016091 systemd-networkd[1429]: cali5bfd54f7579: Gained carrier May 13 12:38:23.029168 containerd[1515]: 2025-05-13 12:38:22.813 [INFO][4151] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0 calico-apiserver-59c5b9dc8b- calico-apiserver 77933221-0b99-4630-8fad-a6aac7797690 758 0 2025-05-13 12:37:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c5b9dc8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59c5b9dc8b-hp2wg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5bfd54f7579 [] []}} ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hp2wg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-" May 13 12:38:23.029168 containerd[1515]: 2025-05-13 12:38:22.814 [INFO][4151] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hp2wg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" May 13 12:38:23.029168 containerd[1515]: 2025-05-13 12:38:22.950 [INFO][4166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" HandleID="k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Workload="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:22.970 [INFO][4166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" HandleID="k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Workload="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59c5b9dc8b-hp2wg", "timestamp":"2025-05-13 12:38:22.947649455 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:22.974 [INFO][4166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:22.974 [INFO][4166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:22.974 [INFO][4166] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:22.976 [INFO][4166] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" host="localhost" May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:22.989 [INFO][4166] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:22.997 [INFO][4166] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:22.999 [INFO][4166] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:23.000 [INFO][4166] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:38:23.029667 containerd[1515]: 2025-05-13 12:38:23.001 [INFO][4166] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" host="localhost" May 13 12:38:23.029907 containerd[1515]: 2025-05-13 12:38:23.002 [INFO][4166] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d May 13 12:38:23.029907 containerd[1515]: 2025-05-13 12:38:23.005 [INFO][4166] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" host="localhost" May 13 12:38:23.029907 containerd[1515]: 2025-05-13 12:38:23.009 [INFO][4166] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" host="localhost" May 13 12:38:23.029907 containerd[1515]: 2025-05-13 12:38:23.009 [INFO][4166] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" host="localhost" May 13 12:38:23.029907 containerd[1515]: 2025-05-13 12:38:23.009 [INFO][4166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 12:38:23.029907 containerd[1515]: 2025-05-13 12:38:23.009 [INFO][4166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" HandleID="k8s-pod-network.caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Workload="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" May 13 12:38:23.030030 containerd[1515]: 2025-05-13 12:38:23.012 [INFO][4151] cni-plugin/k8s.go 386: Populated endpoint ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hp2wg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0", GenerateName:"calico-apiserver-59c5b9dc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"77933221-0b99-4630-8fad-a6aac7797690", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c5b9dc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59c5b9dc8b-hp2wg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bfd54f7579", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:23.030080 containerd[1515]: 2025-05-13 12:38:23.012 [INFO][4151] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hp2wg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" May 13 12:38:23.030080 containerd[1515]: 2025-05-13 12:38:23.013 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bfd54f7579 ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hp2wg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" May 13 12:38:23.030080 containerd[1515]: 2025-05-13 12:38:23.016 [INFO][4151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hp2wg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" May 13 12:38:23.030145 containerd[1515]: 2025-05-13 12:38:23.016 [INFO][4151] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hp2wg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0", GenerateName:"calico-apiserver-59c5b9dc8b-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"77933221-0b99-4630-8fad-a6aac7797690", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c5b9dc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d", Pod:"calico-apiserver-59c5b9dc8b-hp2wg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bfd54f7579", MAC:"d6:05:86:ae:e6:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:23.030193 containerd[1515]: 2025-05-13 12:38:23.026 [INFO][4151] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hp2wg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hp2wg-eth0" May 13 12:38:23.159197 containerd[1515]: time="2025-05-13T12:38:23.159150015Z" level=info msg="connecting to shim caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d" address="unix:///run/containerd/s/906fd11ff687b0c9dea372aa071d22d6b206d21a8a7bb0c2d3cedb6d48a8b9d0" namespace=k8s.io protocol=ttrpc version=3 May 13 12:38:23.183055 systemd[1]: Started 
cri-containerd-caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d.scope - libcontainer container caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d. May 13 12:38:23.192680 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:38:23.213997 systemd-networkd[1429]: vxlan.calico: Gained IPv6LL May 13 12:38:23.214514 containerd[1515]: time="2025-05-13T12:38:23.214467662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c5b9dc8b-hp2wg,Uid:77933221-0b99-4630-8fad-a6aac7797690,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d\"" May 13 12:38:23.224754 containerd[1515]: time="2025-05-13T12:38:23.224723138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 12:38:24.109048 systemd-networkd[1429]: cali5bfd54f7579: Gained IPv6LL May 13 12:38:24.313044 systemd[1]: Started sshd@13-10.0.0.46:22-10.0.0.1:47854.service - OpenSSH per-connection server daemon (10.0.0.1:47854). May 13 12:38:24.372599 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 47854 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:24.373881 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:24.377761 systemd-logind[1485]: New session 14 of user core. May 13 12:38:24.389070 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 12:38:24.502869 sshd[4240]: Connection closed by 10.0.0.1 port 47854 May 13 12:38:24.503791 sshd-session[4238]: pam_unix(sshd:session): session closed for user core May 13 12:38:24.506751 systemd[1]: sshd@13-10.0.0.46:22-10.0.0.1:47854.service: Deactivated successfully. May 13 12:38:24.508835 systemd[1]: session-14.scope: Deactivated successfully. May 13 12:38:24.510798 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. 
May 13 12:38:24.512586 systemd-logind[1485]: Removed session 14. May 13 12:38:24.731733 containerd[1515]: time="2025-05-13T12:38:24.731626230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhqz,Uid:d3317169-9c1a-4d06-be39-f899c6b6ad94,Namespace:calico-system,Attempt:0,}" May 13 12:38:24.839347 systemd-networkd[1429]: cali715eec62505: Link UP May 13 12:38:24.840141 systemd-networkd[1429]: cali715eec62505: Gained carrier May 13 12:38:24.853657 containerd[1515]: 2025-05-13 12:38:24.770 [INFO][4253] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sjhqz-eth0 csi-node-driver- calico-system d3317169-9c1a-4d06-be39-f899c6b6ad94 607 0 2025-05-13 12:37:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sjhqz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali715eec62505 [] []}} ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Namespace="calico-system" Pod="csi-node-driver-sjhqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjhqz-" May 13 12:38:24.853657 containerd[1515]: 2025-05-13 12:38:24.771 [INFO][4253] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Namespace="calico-system" Pod="csi-node-driver-sjhqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjhqz-eth0" May 13 12:38:24.853657 containerd[1515]: 2025-05-13 12:38:24.797 [INFO][4267] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" 
HandleID="k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Workload="localhost-k8s-csi--node--driver--sjhqz-eth0" May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.809 [INFO][4267] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" HandleID="k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Workload="localhost-k8s-csi--node--driver--sjhqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011cf40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sjhqz", "timestamp":"2025-05-13 12:38:24.797964069 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.809 [INFO][4267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.809 [INFO][4267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.809 [INFO][4267] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.810 [INFO][4267] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" host="localhost" May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.814 [INFO][4267] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.818 [INFO][4267] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.820 [INFO][4267] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.822 [INFO][4267] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:38:24.853960 containerd[1515]: 2025-05-13 12:38:24.822 [INFO][4267] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" host="localhost" May 13 12:38:24.854271 containerd[1515]: 2025-05-13 12:38:24.824 [INFO][4267] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468 May 13 12:38:24.854271 containerd[1515]: 2025-05-13 12:38:24.827 [INFO][4267] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" host="localhost" May 13 12:38:24.854271 containerd[1515]: 2025-05-13 12:38:24.833 [INFO][4267] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" host="localhost" May 13 12:38:24.854271 containerd[1515]: 2025-05-13 12:38:24.833 [INFO][4267] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" host="localhost" May 13 12:38:24.854271 containerd[1515]: 2025-05-13 12:38:24.833 [INFO][4267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:38:24.854271 containerd[1515]: 2025-05-13 12:38:24.833 [INFO][4267] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" HandleID="k8s-pod-network.a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Workload="localhost-k8s-csi--node--driver--sjhqz-eth0" May 13 12:38:24.854427 containerd[1515]: 2025-05-13 12:38:24.836 [INFO][4253] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Namespace="calico-system" Pod="csi-node-driver-sjhqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjhqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sjhqz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3317169-9c1a-4d06-be39-f899c6b6ad94", ResourceVersion:"607", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sjhqz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali715eec62505", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:24.854427 containerd[1515]: 2025-05-13 12:38:24.836 [INFO][4253] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Namespace="calico-system" Pod="csi-node-driver-sjhqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjhqz-eth0" May 13 12:38:24.854592 containerd[1515]: 2025-05-13 12:38:24.836 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali715eec62505 ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Namespace="calico-system" Pod="csi-node-driver-sjhqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjhqz-eth0" May 13 12:38:24.854592 containerd[1515]: 2025-05-13 12:38:24.840 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Namespace="calico-system" Pod="csi-node-driver-sjhqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjhqz-eth0" May 13 12:38:24.854738 containerd[1515]: 2025-05-13 12:38:24.841 [INFO][4253] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Namespace="calico-system" 
Pod="csi-node-driver-sjhqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjhqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sjhqz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3317169-9c1a-4d06-be39-f899c6b6ad94", ResourceVersion:"607", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468", Pod:"csi-node-driver-sjhqz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali715eec62505", MAC:"6a:1b:7e:03:4b:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:24.854871 containerd[1515]: 2025-05-13 12:38:24.850 [INFO][4253] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" Namespace="calico-system" Pod="csi-node-driver-sjhqz" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjhqz-eth0" May 13 12:38:24.873112 containerd[1515]: 
time="2025-05-13T12:38:24.873065064Z" level=info msg="connecting to shim a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468" address="unix:///run/containerd/s/55beb3c7578776b4783457ebb78ac1fe7e5d8dea0db5f39529d33b95ea2f6766" namespace=k8s.io protocol=ttrpc version=3 May 13 12:38:24.908065 systemd[1]: Started cri-containerd-a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468.scope - libcontainer container a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468. May 13 12:38:24.918671 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:38:24.930180 containerd[1515]: time="2025-05-13T12:38:24.930140724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjhqz,Uid:d3317169-9c1a-4d06-be39-f899c6b6ad94,Namespace:calico-system,Attempt:0,} returns sandbox id \"a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468\"" May 13 12:38:25.731773 containerd[1515]: time="2025-05-13T12:38:25.731689909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b78c5754f-xv599,Uid:3c236d1b-5eee-40d9-9284-ae0c63951484,Namespace:calico-system,Attempt:0,}" May 13 12:38:25.731773 containerd[1515]: time="2025-05-13T12:38:25.731708630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c5b9dc8b-hmthn,Uid:33d6eab6-5559-4c8d-ab4d-98247a898dc8,Namespace:calico-apiserver,Attempt:0,}" May 13 12:38:25.839076 systemd-networkd[1429]: calia3bc0f074a9: Link UP May 13 12:38:25.840191 systemd-networkd[1429]: calia3bc0f074a9: Gained carrier May 13 12:38:25.853907 containerd[1515]: 2025-05-13 12:38:25.768 [INFO][4351] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0 calico-apiserver-59c5b9dc8b- calico-apiserver 33d6eab6-5559-4c8d-ab4d-98247a898dc8 757 0 2025-05-13 12:37:55 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59c5b9dc8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59c5b9dc8b-hmthn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia3bc0f074a9 [] []}} ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hmthn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-" May 13 12:38:25.853907 containerd[1515]: 2025-05-13 12:38:25.769 [INFO][4351] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hmthn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" May 13 12:38:25.853907 containerd[1515]: 2025-05-13 12:38:25.797 [INFO][4366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" HandleID="k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Workload="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.810 [INFO][4366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" HandleID="k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Workload="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004444d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59c5b9dc8b-hmthn", "timestamp":"2025-05-13 12:38:25.797368681 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.810 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.810 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.810 [INFO][4366] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.812 [INFO][4366] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" host="localhost" May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.816 [INFO][4366] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.820 [INFO][4366] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.821 [INFO][4366] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.823 [INFO][4366] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:38:25.854194 containerd[1515]: 2025-05-13 12:38:25.823 [INFO][4366] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" host="localhost" May 13 12:38:25.854552 containerd[1515]: 2025-05-13 12:38:25.825 [INFO][4366] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286 May 13 12:38:25.854552 containerd[1515]: 2025-05-13 12:38:25.828 [INFO][4366] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" host="localhost" May 13 12:38:25.854552 containerd[1515]: 2025-05-13 12:38:25.832 [INFO][4366] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" host="localhost" May 13 12:38:25.854552 containerd[1515]: 2025-05-13 12:38:25.832 [INFO][4366] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" host="localhost" May 13 12:38:25.854552 containerd[1515]: 2025-05-13 12:38:25.833 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 12:38:25.854552 containerd[1515]: 2025-05-13 12:38:25.833 [INFO][4366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" HandleID="k8s-pod-network.24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Workload="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" May 13 12:38:25.854734 containerd[1515]: 2025-05-13 12:38:25.836 [INFO][4351] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hmthn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0", GenerateName:"calico-apiserver-59c5b9dc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"33d6eab6-5559-4c8d-ab4d-98247a898dc8", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c5b9dc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59c5b9dc8b-hmthn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3bc0f074a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:25.854790 containerd[1515]: 2025-05-13 12:38:25.836 [INFO][4351] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hmthn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" May 13 12:38:25.854790 containerd[1515]: 2025-05-13 12:38:25.836 [INFO][4351] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3bc0f074a9 ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hmthn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" May 13 12:38:25.854790 containerd[1515]: 2025-05-13 12:38:25.839 [INFO][4351] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hmthn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" May 13 12:38:25.854854 containerd[1515]: 2025-05-13 12:38:25.841 [INFO][4351] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hmthn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0", GenerateName:"calico-apiserver-59c5b9dc8b-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"33d6eab6-5559-4c8d-ab4d-98247a898dc8", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59c5b9dc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286", Pod:"calico-apiserver-59c5b9dc8b-hmthn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia3bc0f074a9", MAC:"7e:30:55:38:ff:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:25.854925 containerd[1515]: 2025-05-13 12:38:25.851 [INFO][4351] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" Namespace="calico-apiserver" Pod="calico-apiserver-59c5b9dc8b-hmthn" WorkloadEndpoint="localhost-k8s-calico--apiserver--59c5b9dc8b--hmthn-eth0" May 13 12:38:25.875725 systemd-networkd[1429]: cali8816a5efffc: Link UP May 13 12:38:25.876377 systemd-networkd[1429]: cali8816a5efffc: Gained carrier May 13 12:38:25.877827 containerd[1515]: time="2025-05-13T12:38:25.877770862Z" level=info msg="connecting to shim 24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286" 
address="unix:///run/containerd/s/3a547facf60b3ba90804d3da0291b0bd3b3d0bf1633a9c760cb69759cb4c1d34" namespace=k8s.io protocol=ttrpc version=3 May 13 12:38:25.893106 containerd[1515]: 2025-05-13 12:38:25.772 [INFO][4338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0 calico-kube-controllers-6b78c5754f- calico-system 3c236d1b-5eee-40d9-9284-ae0c63951484 754 0 2025-05-13 12:37:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b78c5754f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6b78c5754f-xv599 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8816a5efffc [] []}} ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Namespace="calico-system" Pod="calico-kube-controllers-6b78c5754f-xv599" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-" May 13 12:38:25.893106 containerd[1515]: 2025-05-13 12:38:25.773 [INFO][4338] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Namespace="calico-system" Pod="calico-kube-controllers-6b78c5754f-xv599" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" May 13 12:38:25.893106 containerd[1515]: 2025-05-13 12:38:25.802 [INFO][4372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" HandleID="k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Workload="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" May 13 12:38:25.893332 containerd[1515]: 2025-05-13 
12:38:25.814 [INFO][4372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" HandleID="k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Workload="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6b78c5754f-xv599", "timestamp":"2025-05-13 12:38:25.802132171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.814 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.833 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.833 [INFO][4372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.834 [INFO][4372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" host="localhost" May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.840 [INFO][4372] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.845 [INFO][4372] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.849 [INFO][4372] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.853 [INFO][4372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:38:25.893332 containerd[1515]: 2025-05-13 12:38:25.853 [INFO][4372] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" host="localhost" May 13 12:38:25.893542 containerd[1515]: 2025-05-13 12:38:25.854 [INFO][4372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf May 13 12:38:25.893542 containerd[1515]: 2025-05-13 12:38:25.859 [INFO][4372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" host="localhost" May 13 12:38:25.893542 containerd[1515]: 2025-05-13 12:38:25.866 [INFO][4372] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" host="localhost" May 13 12:38:25.893542 containerd[1515]: 2025-05-13 12:38:25.866 [INFO][4372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" host="localhost" May 13 12:38:25.893542 containerd[1515]: 2025-05-13 12:38:25.866 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:38:25.893542 containerd[1515]: 2025-05-13 12:38:25.866 [INFO][4372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" HandleID="k8s-pod-network.b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Workload="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" May 13 12:38:25.893649 containerd[1515]: 2025-05-13 12:38:25.868 [INFO][4338] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Namespace="calico-system" Pod="calico-kube-controllers-6b78c5754f-xv599" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0", GenerateName:"calico-kube-controllers-6b78c5754f-", Namespace:"calico-system", SelfLink:"", UID:"3c236d1b-5eee-40d9-9284-ae0c63951484", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b78c5754f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6b78c5754f-xv599", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8816a5efffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:25.893698 containerd[1515]: 2025-05-13 12:38:25.869 [INFO][4338] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Namespace="calico-system" Pod="calico-kube-controllers-6b78c5754f-xv599" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" May 13 12:38:25.893698 containerd[1515]: 2025-05-13 12:38:25.869 [INFO][4338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8816a5efffc ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Namespace="calico-system" Pod="calico-kube-controllers-6b78c5754f-xv599" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" May 13 12:38:25.893698 containerd[1515]: 2025-05-13 12:38:25.876 [INFO][4338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Namespace="calico-system" Pod="calico-kube-controllers-6b78c5754f-xv599" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" May 13 12:38:25.893757 containerd[1515]: 2025-05-13 12:38:25.877 [INFO][4338] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Namespace="calico-system" Pod="calico-kube-controllers-6b78c5754f-xv599" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0", GenerateName:"calico-kube-controllers-6b78c5754f-", Namespace:"calico-system", SelfLink:"", UID:"3c236d1b-5eee-40d9-9284-ae0c63951484", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b78c5754f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf", Pod:"calico-kube-controllers-6b78c5754f-xv599", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8816a5efffc", MAC:"a6:cb:d4:08:ff:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:25.893802 containerd[1515]: 2025-05-13 12:38:25.889 [INFO][4338] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" Namespace="calico-system" Pod="calico-kube-controllers-6b78c5754f-xv599" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b78c5754f--xv599-eth0" May 13 12:38:25.901340 systemd[1]: Started cri-containerd-24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286.scope - libcontainer container 24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286. May 13 12:38:25.916562 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:38:25.940361 containerd[1515]: time="2025-05-13T12:38:25.940303576Z" level=info msg="connecting to shim b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf" address="unix:///run/containerd/s/486b7bb1241f80e93f6d64a12091eabfb58577fced9d967e422b5f5abc5efd84" namespace=k8s.io protocol=ttrpc version=3 May 13 12:38:25.943089 containerd[1515]: time="2025-05-13T12:38:25.942431510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59c5b9dc8b-hmthn,Uid:33d6eab6-5559-4c8d-ab4d-98247a898dc8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286\"" May 13 12:38:25.966066 systemd-networkd[1429]: cali715eec62505: Gained IPv6LL May 13 12:38:25.967092 systemd[1]: Started cri-containerd-b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf.scope - libcontainer container b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf. 
May 13 12:38:25.997869 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:38:26.028299 containerd[1515]: time="2025-05-13T12:38:26.028252899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b78c5754f-xv599,Uid:3c236d1b-5eee-40d9-9284-ae0c63951484,Namespace:calico-system,Attempt:0,} returns sandbox id \"b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf\"" May 13 12:38:26.732781 containerd[1515]: time="2025-05-13T12:38:26.732693289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kxg5d,Uid:1b99018c-e9e6-4d14-a4dc-d064c4bafd96,Namespace:kube-system,Attempt:0,}" May 13 12:38:26.733540 containerd[1515]: time="2025-05-13T12:38:26.733236833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8qxk,Uid:598e01bb-31c9-4a2f-89fd-2acd7cde631f,Namespace:kube-system,Attempt:0,}" May 13 12:38:26.875740 systemd-networkd[1429]: calic993119e9df: Link UP May 13 12:38:26.876321 systemd-networkd[1429]: calic993119e9df: Gained carrier May 13 12:38:26.894684 containerd[1515]: 2025-05-13 12:38:26.778 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0 coredns-7db6d8ff4d- kube-system 598e01bb-31c9-4a2f-89fd-2acd7cde631f 752 0 2025-05-13 12:37:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-q8qxk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic993119e9df [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8qxk" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8qxk-" May 13 12:38:26.894684 containerd[1515]: 2025-05-13 12:38:26.778 [INFO][4522] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8qxk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" May 13 12:38:26.894684 containerd[1515]: 2025-05-13 12:38:26.807 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" HandleID="k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Workload="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.819 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" HandleID="k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Workload="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aafd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-q8qxk", "timestamp":"2025-05-13 12:38:26.807547302 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.819 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.820 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.820 [INFO][4537] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.823 [INFO][4537] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" host="localhost" May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.831 [INFO][4537] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.837 [INFO][4537] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.840 [INFO][4537] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.842 [INFO][4537] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:38:26.894972 containerd[1515]: 2025-05-13 12:38:26.843 [INFO][4537] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" host="localhost" May 13 12:38:26.895161 containerd[1515]: 2025-05-13 12:38:26.844 [INFO][4537] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96 May 13 12:38:26.895161 containerd[1515]: 2025-05-13 12:38:26.849 [INFO][4537] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" host="localhost" May 13 12:38:26.895161 containerd[1515]: 2025-05-13 12:38:26.859 [INFO][4537] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" host="localhost" May 13 12:38:26.895161 containerd[1515]: 2025-05-13 12:38:26.859 [INFO][4537] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" host="localhost" May 13 12:38:26.895161 containerd[1515]: 2025-05-13 12:38:26.859 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:38:26.895161 containerd[1515]: 2025-05-13 12:38:26.859 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" HandleID="k8s-pod-network.3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Workload="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" May 13 12:38:26.895277 containerd[1515]: 2025-05-13 12:38:26.864 [INFO][4522] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8qxk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"598e01bb-31c9-4a2f-89fd-2acd7cde631f", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-q8qxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic993119e9df", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:26.895330 containerd[1515]: 2025-05-13 12:38:26.866 [INFO][4522] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8qxk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" May 13 12:38:26.895330 containerd[1515]: 2025-05-13 12:38:26.866 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic993119e9df ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8qxk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" May 13 12:38:26.895330 containerd[1515]: 2025-05-13 12:38:26.876 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8qxk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" May 13 
12:38:26.895389 containerd[1515]: 2025-05-13 12:38:26.878 [INFO][4522] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8qxk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"598e01bb-31c9-4a2f-89fd-2acd7cde631f", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96", Pod:"coredns-7db6d8ff4d-q8qxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic993119e9df", MAC:"0a:2b:09:b9:b1:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:26.895389 containerd[1515]: 2025-05-13 12:38:26.887 [INFO][4522] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q8qxk" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--q8qxk-eth0" May 13 12:38:26.919548 systemd-networkd[1429]: cali4c09847b772: Link UP May 13 12:38:26.920464 systemd-networkd[1429]: cali4c09847b772: Gained carrier May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.775 [INFO][4507] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0 coredns-7db6d8ff4d- kube-system 1b99018c-e9e6-4d14-a4dc-d064c4bafd96 755 0 2025-05-13 12:37:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-kxg5d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c09847b772 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kxg5d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kxg5d-" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.776 [INFO][4507] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kxg5d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.814 [INFO][4535] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" HandleID="k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Workload="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.828 [INFO][4535] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" HandleID="k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Workload="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000435500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-kxg5d", "timestamp":"2025-05-13 12:38:26.814622765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.828 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.859 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.859 [INFO][4535] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.861 [INFO][4535] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.875 [INFO][4535] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.885 [INFO][4535] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.889 [INFO][4535] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.896 [INFO][4535] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.896 [INFO][4535] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.900 [INFO][4535] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.906 [INFO][4535] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.912 [INFO][4535] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.912 [INFO][4535] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" host="localhost" May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.912 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:38:26.939281 containerd[1515]: 2025-05-13 12:38:26.912 [INFO][4535] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" HandleID="k8s-pod-network.6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Workload="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" May 13 12:38:26.940303 containerd[1515]: 2025-05-13 12:38:26.915 [INFO][4507] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kxg5d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1b99018c-e9e6-4d14-a4dc-d064c4bafd96", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-kxg5d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c09847b772", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:26.940303 containerd[1515]: 2025-05-13 12:38:26.915 [INFO][4507] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kxg5d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" May 13 12:38:26.940303 containerd[1515]: 2025-05-13 12:38:26.915 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c09847b772 ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kxg5d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" May 13 12:38:26.940303 containerd[1515]: 2025-05-13 12:38:26.920 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kxg5d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" May 13 
12:38:26.940303 containerd[1515]: 2025-05-13 12:38:26.921 [INFO][4507] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kxg5d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"1b99018c-e9e6-4d14-a4dc-d064c4bafd96", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 37, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba", Pod:"coredns-7db6d8ff4d-kxg5d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c09847b772", MAC:"f2:a6:77:bf:53:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:38:26.940303 containerd[1515]: 2025-05-13 12:38:26.936 [INFO][4507] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kxg5d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kxg5d-eth0" May 13 12:38:26.963135 containerd[1515]: time="2025-05-13T12:38:26.963079736Z" level=info msg="connecting to shim 6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba" address="unix:///run/containerd/s/c1fe05b72a9db1feddf92d57742c2367c2825c83ed50baf15ebaee3cec55f7c8" namespace=k8s.io protocol=ttrpc version=3 May 13 12:38:26.966515 containerd[1515]: time="2025-05-13T12:38:26.966473162Z" level=info msg="connecting to shim 3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96" address="unix:///run/containerd/s/694b847a0521edb1479914f07a048925edae9ce94304f7fef2fe1713bbec5b96" namespace=k8s.io protocol=ttrpc version=3 May 13 12:38:26.991052 systemd[1]: Started cri-containerd-3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96.scope - libcontainer container 3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96. May 13 12:38:26.994860 systemd[1]: Started cri-containerd-6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba.scope - libcontainer container 6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba. 
May 13 12:38:27.005718 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:38:27.006960 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:38:27.028095 containerd[1515]: time="2025-05-13T12:38:27.028052535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8qxk,Uid:598e01bb-31c9-4a2f-89fd-2acd7cde631f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96\"" May 13 12:38:27.031322 containerd[1515]: time="2025-05-13T12:38:27.031284670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kxg5d,Uid:1b99018c-e9e6-4d14-a4dc-d064c4bafd96,Namespace:kube-system,Attempt:0,} returns sandbox id \"6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba\"" May 13 12:38:27.032872 containerd[1515]: time="2025-05-13T12:38:27.032797053Z" level=info msg="CreateContainer within sandbox \"3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:38:27.035852 containerd[1515]: time="2025-05-13T12:38:27.035813779Z" level=info msg="CreateContainer within sandbox \"6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:38:27.040699 containerd[1515]: time="2025-05-13T12:38:27.040664822Z" level=info msg="Container 6f25d081041f545d1e27494e11f362d8528cbc6f39057b873d13f9c16cce400e: CDI devices from CRI Config.CDIDevices: []" May 13 12:38:27.043533 containerd[1515]: time="2025-05-13T12:38:27.043474420Z" level=info msg="Container 37e6ee05eb0c7f9c35d1dd515f9a69c0bda7e78d55f39ea874205639ef8d07e6: CDI devices from CRI Config.CDIDevices: []" May 13 12:38:27.046156 containerd[1515]: time="2025-05-13T12:38:27.046108850Z" level=info msg="CreateContainer within 
sandbox \"3d4842503ff736c37e4a8560a181432f2750bb15911db41bbe22cf0ab4dc4a96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f25d081041f545d1e27494e11f362d8528cbc6f39057b873d13f9c16cce400e\"" May 13 12:38:27.046818 containerd[1515]: time="2025-05-13T12:38:27.046708715Z" level=info msg="StartContainer for \"6f25d081041f545d1e27494e11f362d8528cbc6f39057b873d13f9c16cce400e\"" May 13 12:38:27.048480 containerd[1515]: time="2025-05-13T12:38:27.048386305Z" level=info msg="connecting to shim 6f25d081041f545d1e27494e11f362d8528cbc6f39057b873d13f9c16cce400e" address="unix:///run/containerd/s/694b847a0521edb1479914f07a048925edae9ce94304f7fef2fe1713bbec5b96" protocol=ttrpc version=3 May 13 12:38:27.050378 containerd[1515]: time="2025-05-13T12:38:27.050344987Z" level=info msg="CreateContainer within sandbox \"6421efe3584ac179cbb659541cc89e4c7ef61350fbcd333f26336b7860e93fba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37e6ee05eb0c7f9c35d1dd515f9a69c0bda7e78d55f39ea874205639ef8d07e6\"" May 13 12:38:27.051122 containerd[1515]: time="2025-05-13T12:38:27.051043696Z" level=info msg="StartContainer for \"37e6ee05eb0c7f9c35d1dd515f9a69c0bda7e78d55f39ea874205639ef8d07e6\"" May 13 12:38:27.051832 containerd[1515]: time="2025-05-13T12:38:27.051801448Z" level=info msg="connecting to shim 37e6ee05eb0c7f9c35d1dd515f9a69c0bda7e78d55f39ea874205639ef8d07e6" address="unix:///run/containerd/s/c1fe05b72a9db1feddf92d57742c2367c2825c83ed50baf15ebaee3cec55f7c8" protocol=ttrpc version=3 May 13 12:38:27.077053 systemd[1]: Started cri-containerd-37e6ee05eb0c7f9c35d1dd515f9a69c0bda7e78d55f39ea874205639ef8d07e6.scope - libcontainer container 37e6ee05eb0c7f9c35d1dd515f9a69c0bda7e78d55f39ea874205639ef8d07e6. May 13 12:38:27.078155 systemd[1]: Started cri-containerd-6f25d081041f545d1e27494e11f362d8528cbc6f39057b873d13f9c16cce400e.scope - libcontainer container 6f25d081041f545d1e27494e11f362d8528cbc6f39057b873d13f9c16cce400e. 
May 13 12:38:27.109635 containerd[1515]: time="2025-05-13T12:38:27.109596065Z" level=info msg="StartContainer for \"6f25d081041f545d1e27494e11f362d8528cbc6f39057b873d13f9c16cce400e\" returns successfully" May 13 12:38:27.112895 containerd[1515]: time="2025-05-13T12:38:27.112782878Z" level=info msg="StartContainer for \"37e6ee05eb0c7f9c35d1dd515f9a69c0bda7e78d55f39ea874205639ef8d07e6\" returns successfully" May 13 12:38:27.757080 systemd-networkd[1429]: cali8816a5efffc: Gained IPv6LL May 13 12:38:27.821033 systemd-networkd[1429]: calia3bc0f074a9: Gained IPv6LL May 13 12:38:27.919447 kubelet[2762]: I0513 12:38:27.919309 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kxg5d" podStartSLOduration=39.919293849 podStartE2EDuration="39.919293849s" podCreationTimestamp="2025-05-13 12:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:38:27.9186079 +0000 UTC m=+55.264438346" watchObservedRunningTime="2025-05-13 12:38:27.919293849 +0000 UTC m=+55.265124295" May 13 12:38:27.934593 kubelet[2762]: I0513 12:38:27.934509 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-q8qxk" podStartSLOduration=39.934478764 podStartE2EDuration="39.934478764s" podCreationTimestamp="2025-05-13 12:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:38:27.93342096 +0000 UTC m=+55.279251406" watchObservedRunningTime="2025-05-13 12:38:27.934478764 +0000 UTC m=+55.280309210" May 13 12:38:28.269136 systemd-networkd[1429]: cali4c09847b772: Gained IPv6LL May 13 12:38:28.397990 systemd-networkd[1429]: calic993119e9df: Gained IPv6LL May 13 12:38:29.520280 systemd[1]: Started sshd@14-10.0.0.46:22-10.0.0.1:47866.service - OpenSSH per-connection server daemon (10.0.0.1:47866). 
May 13 12:38:29.573221 sshd[4760]: Accepted publickey for core from 10.0.0.1 port 47866 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:29.574467 sshd-session[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:29.578672 systemd-logind[1485]: New session 15 of user core. May 13 12:38:29.595084 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 12:38:29.716422 sshd[4762]: Connection closed by 10.0.0.1 port 47866 May 13 12:38:29.716735 sshd-session[4760]: pam_unix(sshd:session): session closed for user core May 13 12:38:29.720516 systemd[1]: sshd@14-10.0.0.46:22-10.0.0.1:47866.service: Deactivated successfully. May 13 12:38:29.722469 systemd[1]: session-15.scope: Deactivated successfully. May 13 12:38:29.723134 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. May 13 12:38:29.724771 systemd-logind[1485]: Removed session 15. May 13 12:38:30.582746 containerd[1515]: time="2025-05-13T12:38:30.582518584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:30.583532 containerd[1515]: time="2025-05-13T12:38:30.583226692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 12:38:30.584344 containerd[1515]: time="2025-05-13T12:38:30.584292853Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:30.592037 containerd[1515]: time="2025-05-13T12:38:30.589986914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:30.592606 containerd[1515]: time="2025-05-13T12:38:30.590729222Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 7.365965402s" May 13 12:38:30.592705 containerd[1515]: time="2025-05-13T12:38:30.592688418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 12:38:30.593708 containerd[1515]: time="2025-05-13T12:38:30.593671296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 12:38:30.595969 containerd[1515]: time="2025-05-13T12:38:30.595940624Z" level=info msg="CreateContainer within sandbox \"caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 12:38:30.602278 containerd[1515]: time="2025-05-13T12:38:30.601456638Z" level=info msg="Container dc4f9dea0af664d6ad87ff943dc80413d748b9e04b3a5aefd6826371c5b24b46: CDI devices from CRI Config.CDIDevices: []" May 13 12:38:30.606172 containerd[1515]: time="2025-05-13T12:38:30.606134739Z" level=info msg="CreateContainer within sandbox \"caa70eff6b0e66142335fe5bac6b95f0939760b686c6523dd31992712242de1d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc4f9dea0af664d6ad87ff943dc80413d748b9e04b3a5aefd6826371c5b24b46\"" May 13 12:38:30.606557 containerd[1515]: time="2025-05-13T12:38:30.606529355Z" level=info msg="StartContainer for \"dc4f9dea0af664d6ad87ff943dc80413d748b9e04b3a5aefd6826371c5b24b46\"" May 13 12:38:30.607716 containerd[1515]: time="2025-05-13T12:38:30.607649198Z" level=info msg="connecting to shim dc4f9dea0af664d6ad87ff943dc80413d748b9e04b3a5aefd6826371c5b24b46" 
address="unix:///run/containerd/s/906fd11ff687b0c9dea372aa071d22d6b206d21a8a7bb0c2d3cedb6d48a8b9d0" protocol=ttrpc version=3 May 13 12:38:30.628039 systemd[1]: Started cri-containerd-dc4f9dea0af664d6ad87ff943dc80413d748b9e04b3a5aefd6826371c5b24b46.scope - libcontainer container dc4f9dea0af664d6ad87ff943dc80413d748b9e04b3a5aefd6826371c5b24b46. May 13 12:38:30.664021 containerd[1515]: time="2025-05-13T12:38:30.663799414Z" level=info msg="StartContainer for \"dc4f9dea0af664d6ad87ff943dc80413d748b9e04b3a5aefd6826371c5b24b46\" returns successfully" May 13 12:38:30.927920 kubelet[2762]: I0513 12:38:30.927431 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hp2wg" podStartSLOduration=28.558401746 podStartE2EDuration="35.927413028s" podCreationTimestamp="2025-05-13 12:37:55 +0000 UTC" firstStartedPulling="2025-05-13 12:38:23.224496448 +0000 UTC m=+50.570326894" lastFinishedPulling="2025-05-13 12:38:30.59350773 +0000 UTC m=+57.939338176" observedRunningTime="2025-05-13 12:38:30.92617102 +0000 UTC m=+58.272001466" watchObservedRunningTime="2025-05-13 12:38:30.927413028 +0000 UTC m=+58.273243474" May 13 12:38:31.924164 kubelet[2762]: I0513 12:38:31.924104 2762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:38:33.995829 containerd[1515]: time="2025-05-13T12:38:33.995787449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:33.996643 containerd[1515]: time="2025-05-13T12:38:33.996405151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 12:38:33.997379 containerd[1515]: time="2025-05-13T12:38:33.997343425Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:33.999012 
containerd[1515]: time="2025-05-13T12:38:33.998967043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:33.999546 containerd[1515]: time="2025-05-13T12:38:33.999521303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 3.405818686s" May 13 12:38:33.999590 containerd[1515]: time="2025-05-13T12:38:33.999552304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 12:38:34.001449 containerd[1515]: time="2025-05-13T12:38:34.001424571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 12:38:34.002298 containerd[1515]: time="2025-05-13T12:38:34.002262521Z" level=info msg="CreateContainer within sandbox \"a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 12:38:34.010902 containerd[1515]: time="2025-05-13T12:38:34.010860662Z" level=info msg="Container 47c8f61598ac722e6c35b92c43d08a76df5cca29714044789342b5f27586b40f: CDI devices from CRI Config.CDIDevices: []" May 13 12:38:34.017489 containerd[1515]: time="2025-05-13T12:38:34.017446453Z" level=info msg="CreateContainer within sandbox \"a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"47c8f61598ac722e6c35b92c43d08a76df5cca29714044789342b5f27586b40f\"" May 13 12:38:34.018573 containerd[1515]: time="2025-05-13T12:38:34.018027874Z" level=info 
msg="StartContainer for \"47c8f61598ac722e6c35b92c43d08a76df5cca29714044789342b5f27586b40f\"" May 13 12:38:34.019772 containerd[1515]: time="2025-05-13T12:38:34.019737494Z" level=info msg="connecting to shim 47c8f61598ac722e6c35b92c43d08a76df5cca29714044789342b5f27586b40f" address="unix:///run/containerd/s/55beb3c7578776b4783457ebb78ac1fe7e5d8dea0db5f39529d33b95ea2f6766" protocol=ttrpc version=3 May 13 12:38:34.040055 systemd[1]: Started cri-containerd-47c8f61598ac722e6c35b92c43d08a76df5cca29714044789342b5f27586b40f.scope - libcontainer container 47c8f61598ac722e6c35b92c43d08a76df5cca29714044789342b5f27586b40f. May 13 12:38:34.078008 containerd[1515]: time="2025-05-13T12:38:34.077536321Z" level=info msg="StartContainer for \"47c8f61598ac722e6c35b92c43d08a76df5cca29714044789342b5f27586b40f\" returns successfully" May 13 12:38:34.737612 systemd[1]: Started sshd@15-10.0.0.46:22-10.0.0.1:43826.service - OpenSSH per-connection server daemon (10.0.0.1:43826). May 13 12:38:34.797466 sshd[4866]: Accepted publickey for core from 10.0.0.1 port 43826 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:34.798777 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:34.803030 systemd-logind[1485]: New session 16 of user core. May 13 12:38:34.809030 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 12:38:34.943606 sshd[4868]: Connection closed by 10.0.0.1 port 43826 May 13 12:38:34.944086 sshd-session[4866]: pam_unix(sshd:session): session closed for user core May 13 12:38:34.947429 systemd[1]: sshd@15-10.0.0.46:22-10.0.0.1:43826.service: Deactivated successfully. May 13 12:38:34.949069 systemd[1]: session-16.scope: Deactivated successfully. May 13 12:38:34.949801 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. May 13 12:38:34.950804 systemd-logind[1485]: Removed session 16. 
May 13 12:38:35.494850 containerd[1515]: time="2025-05-13T12:38:35.494799100Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:35.495353 containerd[1515]: time="2025-05-13T12:38:35.495180473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 12:38:35.497249 containerd[1515]: time="2025-05-13T12:38:35.497212702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.495670007s" May 13 12:38:35.497249 containerd[1515]: time="2025-05-13T12:38:35.497246304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 12:38:35.499280 containerd[1515]: time="2025-05-13T12:38:35.499241732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 12:38:35.499358 containerd[1515]: time="2025-05-13T12:38:35.499308334Z" level=info msg="CreateContainer within sandbox \"24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 12:38:35.509265 containerd[1515]: time="2025-05-13T12:38:35.508692615Z" level=info msg="Container eb9254b1a5064de77fbffe904d0224ee9ece1caf07d45a82b80ad2830a2daad9: CDI devices from CRI Config.CDIDevices: []" May 13 12:38:35.514939 containerd[1515]: time="2025-05-13T12:38:35.514881667Z" level=info msg="CreateContainer within sandbox \"24ef4fed0fd0f5f28658e423f8be578bb7e553b91da51988af483fc1a0fe8286\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb9254b1a5064de77fbffe904d0224ee9ece1caf07d45a82b80ad2830a2daad9\"" May 13 12:38:35.516427 containerd[1515]: time="2025-05-13T12:38:35.516395759Z" level=info msg="StartContainer for \"eb9254b1a5064de77fbffe904d0224ee9ece1caf07d45a82b80ad2830a2daad9\"" May 13 12:38:35.517611 containerd[1515]: time="2025-05-13T12:38:35.517581720Z" level=info msg="connecting to shim eb9254b1a5064de77fbffe904d0224ee9ece1caf07d45a82b80ad2830a2daad9" address="unix:///run/containerd/s/3a547facf60b3ba90804d3da0291b0bd3b3d0bf1633a9c760cb69759cb4c1d34" protocol=ttrpc version=3 May 13 12:38:35.540052 systemd[1]: Started cri-containerd-eb9254b1a5064de77fbffe904d0224ee9ece1caf07d45a82b80ad2830a2daad9.scope - libcontainer container eb9254b1a5064de77fbffe904d0224ee9ece1caf07d45a82b80ad2830a2daad9. May 13 12:38:35.574644 containerd[1515]: time="2025-05-13T12:38:35.574607072Z" level=info msg="StartContainer for \"eb9254b1a5064de77fbffe904d0224ee9ece1caf07d45a82b80ad2830a2daad9\" returns successfully" May 13 12:38:35.943587 kubelet[2762]: I0513 12:38:35.943523 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59c5b9dc8b-hmthn" podStartSLOduration=31.391310349 podStartE2EDuration="40.94350606s" podCreationTimestamp="2025-05-13 12:37:55 +0000 UTC" firstStartedPulling="2025-05-13 12:38:25.945777937 +0000 UTC m=+53.291608383" lastFinishedPulling="2025-05-13 12:38:35.497973648 +0000 UTC m=+62.843804094" observedRunningTime="2025-05-13 12:38:35.942738034 +0000 UTC m=+63.288568480" watchObservedRunningTime="2025-05-13 12:38:35.94350606 +0000 UTC m=+63.289336506" May 13 12:38:39.959200 systemd[1]: Started sshd@16-10.0.0.46:22-10.0.0.1:43834.service - OpenSSH per-connection server daemon (10.0.0.1:43834). 
May 13 12:38:40.023787 sshd[4919]: Accepted publickey for core from 10.0.0.1 port 43834 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:40.025237 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:40.029434 systemd-logind[1485]: New session 17 of user core. May 13 12:38:40.046067 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 12:38:40.209171 sshd[4921]: Connection closed by 10.0.0.1 port 43834 May 13 12:38:40.208676 sshd-session[4919]: pam_unix(sshd:session): session closed for user core May 13 12:38:40.212782 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. May 13 12:38:40.213026 systemd[1]: sshd@16-10.0.0.46:22-10.0.0.1:43834.service: Deactivated successfully. May 13 12:38:40.214636 systemd[1]: session-17.scope: Deactivated successfully. May 13 12:38:40.215740 systemd-logind[1485]: Removed session 17. May 13 12:38:41.124091 containerd[1515]: time="2025-05-13T12:38:41.124040761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:41.124667 containerd[1515]: time="2025-05-13T12:38:41.124627099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 12:38:41.125221 containerd[1515]: time="2025-05-13T12:38:41.125190596Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:41.127293 containerd[1515]: time="2025-05-13T12:38:41.127255137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:41.127833 containerd[1515]: 
time="2025-05-13T12:38:41.127800793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 5.6285259s" May 13 12:38:41.127866 containerd[1515]: time="2025-05-13T12:38:41.127831354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 12:38:41.129641 containerd[1515]: time="2025-05-13T12:38:41.129617247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 12:38:41.135696 containerd[1515]: time="2025-05-13T12:38:41.135660426Z" level=info msg="CreateContainer within sandbox \"b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 12:38:41.143788 containerd[1515]: time="2025-05-13T12:38:41.143741946Z" level=info msg="Container f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859: CDI devices from CRI Config.CDIDevices: []" May 13 12:38:41.150008 containerd[1515]: time="2025-05-13T12:38:41.149969931Z" level=info msg="CreateContainer within sandbox \"b32b0dd57638cea013d7e165f0692a3d454ef38bd1f155f344b1b81193f001cf\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859\"" May 13 12:38:41.150689 containerd[1515]: time="2025-05-13T12:38:41.150578429Z" level=info msg="StartContainer for \"f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859\"" May 13 12:38:41.151874 containerd[1515]: time="2025-05-13T12:38:41.151847867Z" level=info msg="connecting to shim 
f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859" address="unix:///run/containerd/s/486b7bb1241f80e93f6d64a12091eabfb58577fced9d967e422b5f5abc5efd84" protocol=ttrpc version=3 May 13 12:38:41.174060 systemd[1]: Started cri-containerd-f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859.scope - libcontainer container f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859. May 13 12:38:41.207158 containerd[1515]: time="2025-05-13T12:38:41.206310643Z" level=info msg="StartContainer for \"f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859\" returns successfully" May 13 12:38:41.961935 kubelet[2762]: I0513 12:38:41.961566 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b78c5754f-xv599" podStartSLOduration=31.862312355 podStartE2EDuration="46.96126909s" podCreationTimestamp="2025-05-13 12:37:55 +0000 UTC" firstStartedPulling="2025-05-13 12:38:26.029870408 +0000 UTC m=+53.375700854" lastFinishedPulling="2025-05-13 12:38:41.128827143 +0000 UTC m=+68.474657589" observedRunningTime="2025-05-13 12:38:41.960718554 +0000 UTC m=+69.306549040" watchObservedRunningTime="2025-05-13 12:38:41.96126909 +0000 UTC m=+69.307099536" May 13 12:38:41.988753 containerd[1515]: time="2025-05-13T12:38:41.988714584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859\" id:\"97c65fe8f7bd59e655ffbfbf4ec16e672c841b717268362fb74cc64d10655c2f\" pid:4989 exited_at:{seconds:1747139921 nanos:988351054}" May 13 12:38:42.110148 containerd[1515]: time="2025-05-13T12:38:42.110101433Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f45aa44056106325be27617bdea1bb83f8aeb60d3cc353525b57cc221935e859\" id:\"9565cedc5498588e6192df4269a8bd717309ca757d1c17b50463c45815d89ef1\" pid:5011 exited_at:{seconds:1747139922 nanos:109864146}" May 13 12:38:43.138870 containerd[1515]: 
time="2025-05-13T12:38:43.138506209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:43.139269 containerd[1515]: time="2025-05-13T12:38:43.139171068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 12:38:43.139800 containerd[1515]: time="2025-05-13T12:38:43.139777845Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:43.141675 containerd[1515]: time="2025-05-13T12:38:43.141645138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:38:43.142286 containerd[1515]: time="2025-05-13T12:38:43.142260315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 2.012614867s" May 13 12:38:43.142335 containerd[1515]: time="2025-05-13T12:38:43.142289716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 12:38:43.146305 containerd[1515]: time="2025-05-13T12:38:43.146185787Z" level=info msg="CreateContainer within sandbox \"a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 
12:38:43.153917 containerd[1515]: time="2025-05-13T12:38:43.153094463Z" level=info msg="Container b7d11efc7eac1258f52eb82bd12426a5104619521cf1f6bcf796814f86585c2e: CDI devices from CRI Config.CDIDevices: []" May 13 12:38:43.161005 containerd[1515]: time="2025-05-13T12:38:43.160965406Z" level=info msg="CreateContainer within sandbox \"a304f4d674fe581e1ccd5aba67d62e5564b42d841fd1b3ae2099aee586e96468\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b7d11efc7eac1258f52eb82bd12426a5104619521cf1f6bcf796814f86585c2e\"" May 13 12:38:43.162690 containerd[1515]: time="2025-05-13T12:38:43.161592903Z" level=info msg="StartContainer for \"b7d11efc7eac1258f52eb82bd12426a5104619521cf1f6bcf796814f86585c2e\"" May 13 12:38:43.163062 containerd[1515]: time="2025-05-13T12:38:43.163037184Z" level=info msg="connecting to shim b7d11efc7eac1258f52eb82bd12426a5104619521cf1f6bcf796814f86585c2e" address="unix:///run/containerd/s/55beb3c7578776b4783457ebb78ac1fe7e5d8dea0db5f39529d33b95ea2f6766" protocol=ttrpc version=3 May 13 12:38:43.183114 systemd[1]: Started cri-containerd-b7d11efc7eac1258f52eb82bd12426a5104619521cf1f6bcf796814f86585c2e.scope - libcontainer container b7d11efc7eac1258f52eb82bd12426a5104619521cf1f6bcf796814f86585c2e. 
May 13 12:38:43.215555 containerd[1515]: time="2025-05-13T12:38:43.215508912Z" level=info msg="StartContainer for \"b7d11efc7eac1258f52eb82bd12426a5104619521cf1f6bcf796814f86585c2e\" returns successfully" May 13 12:38:43.838814 kubelet[2762]: I0513 12:38:43.838627 2762 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 12:38:43.838814 kubelet[2762]: I0513 12:38:43.838699 2762 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 12:38:43.973311 kubelet[2762]: I0513 12:38:43.973074 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sjhqz" podStartSLOduration=30.761337786 podStartE2EDuration="48.973059784s" podCreationTimestamp="2025-05-13 12:37:55 +0000 UTC" firstStartedPulling="2025-05-13 12:38:24.931495705 +0000 UTC m=+52.277326111" lastFinishedPulling="2025-05-13 12:38:43.143217663 +0000 UTC m=+70.489048109" observedRunningTime="2025-05-13 12:38:43.972007834 +0000 UTC m=+71.317838360" watchObservedRunningTime="2025-05-13 12:38:43.973059784 +0000 UTC m=+71.318890230" May 13 12:38:45.228375 systemd[1]: Started sshd@17-10.0.0.46:22-10.0.0.1:40914.service - OpenSSH per-connection server daemon (10.0.0.1:40914). May 13 12:38:45.292103 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 40914 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:38:45.293644 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:38:45.298105 systemd-logind[1485]: New session 18 of user core. May 13 12:38:45.308063 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 13 12:38:45.482748 sshd[5068]: Connection closed by 10.0.0.1 port 40914
May 13 12:38:45.483214 sshd-session[5066]: pam_unix(sshd:session): session closed for user core
May 13 12:38:45.494056 systemd[1]: sshd@17-10.0.0.46:22-10.0.0.1:40914.service: Deactivated successfully.
May 13 12:38:45.495508 systemd[1]: session-18.scope: Deactivated successfully.
May 13 12:38:45.496225 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
May 13 12:38:45.498628 systemd[1]: Started sshd@18-10.0.0.46:22-10.0.0.1:40922.service - OpenSSH per-connection server daemon (10.0.0.1:40922).
May 13 12:38:45.499455 systemd-logind[1485]: Removed session 18.
May 13 12:38:45.552096 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 40922 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:38:45.553473 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:38:45.558009 systemd-logind[1485]: New session 19 of user core.
May 13 12:38:45.571051 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 12:38:45.837267 sshd[5087]: Connection closed by 10.0.0.1 port 40922
May 13 12:38:45.837153 sshd-session[5084]: pam_unix(sshd:session): session closed for user core
May 13 12:38:45.853009 systemd[1]: sshd@18-10.0.0.46:22-10.0.0.1:40922.service: Deactivated successfully.
May 13 12:38:45.854472 systemd[1]: session-19.scope: Deactivated successfully.
May 13 12:38:45.855113 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
May 13 12:38:45.857554 systemd[1]: Started sshd@19-10.0.0.46:22-10.0.0.1:40934.service - OpenSSH per-connection server daemon (10.0.0.1:40934).
May 13 12:38:45.858114 systemd-logind[1485]: Removed session 19.
May 13 12:38:45.910134 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 40934 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:38:45.911486 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:38:45.915957 systemd-logind[1485]: New session 20 of user core.
May 13 12:38:45.924045 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 12:38:46.203591 containerd[1515]: time="2025-05-13T12:38:46.203544904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95fe744f8779436f2050f937d0231a7dd411169d312f9fed858e4beca72ff58c\" id:\"b41568f6d47396ff071e4b956063d8586132972716500d8cafc0205a49b4b5ac\" pid:5120 exited_at:{seconds:1747139926 nanos:203260097}"
May 13 12:38:47.456368 sshd[5101]: Connection closed by 10.0.0.1 port 40934
May 13 12:38:47.457051 sshd-session[5098]: pam_unix(sshd:session): session closed for user core
May 13 12:38:47.469557 systemd[1]: sshd@19-10.0.0.46:22-10.0.0.1:40934.service: Deactivated successfully.
May 13 12:38:47.472478 systemd[1]: session-20.scope: Deactivated successfully.
May 13 12:38:47.473579 systemd[1]: session-20.scope: Consumed 516ms CPU time, 70.5M memory peak.
May 13 12:38:47.474413 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit.
May 13 12:38:47.479882 systemd[1]: Started sshd@20-10.0.0.46:22-10.0.0.1:40940.service - OpenSSH per-connection server daemon (10.0.0.1:40940).
May 13 12:38:47.484199 systemd-logind[1485]: Removed session 20.
May 13 12:38:47.542596 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 40940 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:38:47.543858 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:38:47.548703 systemd-logind[1485]: New session 21 of user core.
May 13 12:38:47.557094 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 12:38:47.793059 sshd[5151]: Connection closed by 10.0.0.1 port 40940
May 13 12:38:47.793575 sshd-session[5145]: pam_unix(sshd:session): session closed for user core
May 13 12:38:47.804940 systemd[1]: sshd@20-10.0.0.46:22-10.0.0.1:40940.service: Deactivated successfully.
May 13 12:38:47.806671 systemd[1]: session-21.scope: Deactivated successfully.
May 13 12:38:47.807996 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
May 13 12:38:47.809447 systemd-logind[1485]: Removed session 21.
May 13 12:38:47.810866 systemd[1]: Started sshd@21-10.0.0.46:22-10.0.0.1:40948.service - OpenSSH per-connection server daemon (10.0.0.1:40948).
May 13 12:38:47.865601 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 40948 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:38:47.867047 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:38:47.873035 systemd-logind[1485]: New session 22 of user core.
May 13 12:38:47.882046 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 12:38:48.043924 sshd[5165]: Connection closed by 10.0.0.1 port 40948
May 13 12:38:48.044558 sshd-session[5163]: pam_unix(sshd:session): session closed for user core
May 13 12:38:48.048224 systemd[1]: sshd@21-10.0.0.46:22-10.0.0.1:40948.service: Deactivated successfully.
May 13 12:38:48.049975 systemd[1]: session-22.scope: Deactivated successfully.
May 13 12:38:48.050626 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
May 13 12:38:48.051657 systemd-logind[1485]: Removed session 22.
May 13 12:38:53.055107 systemd[1]: Started sshd@22-10.0.0.46:22-10.0.0.1:58506.service - OpenSSH per-connection server daemon (10.0.0.1:58506).
May 13 12:38:53.115601 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 58506 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:38:53.116834 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:38:53.120381 systemd-logind[1485]: New session 23 of user core.
May 13 12:38:53.136031 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 12:38:53.247236 sshd[5187]: Connection closed by 10.0.0.1 port 58506
May 13 12:38:53.247539 sshd-session[5185]: pam_unix(sshd:session): session closed for user core
May 13 12:38:53.250934 systemd[1]: sshd@22-10.0.0.46:22-10.0.0.1:58506.service: Deactivated successfully.
May 13 12:38:53.252579 systemd[1]: session-23.scope: Deactivated successfully.
May 13 12:38:53.253230 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
May 13 12:38:53.254598 systemd-logind[1485]: Removed session 23.
May 13 12:38:58.262569 systemd[1]: Started sshd@23-10.0.0.46:22-10.0.0.1:58510.service - OpenSSH per-connection server daemon (10.0.0.1:58510).
May 13 12:38:58.320607 sshd[5202]: Accepted publickey for core from 10.0.0.1 port 58510 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:38:58.322053 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:38:58.326040 systemd-logind[1485]: New session 24 of user core.
May 13 12:38:58.340106 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 12:38:58.457454 sshd[5204]: Connection closed by 10.0.0.1 port 58510
May 13 12:38:58.457618 sshd-session[5202]: pam_unix(sshd:session): session closed for user core
May 13 12:38:58.462241 systemd[1]: sshd@23-10.0.0.46:22-10.0.0.1:58510.service: Deactivated successfully.
May 13 12:38:58.464082 systemd[1]: session-24.scope: Deactivated successfully.
May 13 12:38:58.465929 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
May 13 12:38:58.468381 systemd-logind[1485]: Removed session 24.
May 13 12:39:03.472326 systemd[1]: Started sshd@24-10.0.0.46:22-10.0.0.1:47304.service - OpenSSH per-connection server daemon (10.0.0.1:47304).
May 13 12:39:03.526542 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 47304 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:39:03.527680 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:39:03.532601 systemd-logind[1485]: New session 25 of user core.
May 13 12:39:03.540413 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 12:39:03.667736 sshd[5225]: Connection closed by 10.0.0.1 port 47304
May 13 12:39:03.668064 sshd-session[5223]: pam_unix(sshd:session): session closed for user core
May 13 12:39:03.671640 systemd[1]: sshd@24-10.0.0.46:22-10.0.0.1:47304.service: Deactivated successfully.
May 13 12:39:03.673378 systemd[1]: session-25.scope: Deactivated successfully.
May 13 12:39:03.677780 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
May 13 12:39:03.679626 systemd-logind[1485]: Removed session 25.