May 12 13:27:41.952101 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 12 13:27:41.952121 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon May 12 12:12:07 -00 2025
May 12 13:27:41.952131 kernel: KASLR enabled
May 12 13:27:41.952136 kernel: efi: EFI v2.7 by EDK II
May 12 13:27:41.952142 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 12 13:27:41.952148 kernel: random: crng init done
May 12 13:27:41.952154 kernel: secureboot: Secure boot disabled
May 12 13:27:41.952160 kernel: ACPI: Early table checksum verification disabled
May 12 13:27:41.952166 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 12 13:27:41.952173 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 12 13:27:41.952179 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952185 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952190 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952196 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952203 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952211 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952217 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952223 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952229 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:27:41.952234 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 12 13:27:41.952240 kernel: NUMA: Failed to initialise from firmware
May 12 13:27:41.952246 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 12 13:27:41.952253 kernel: NUMA: NODE_DATA [mem 0xdc956e00-0xdc95dfff]
May 12 13:27:41.952258 kernel: Zone ranges:
May 12 13:27:41.952273 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 12 13:27:41.952282 kernel: DMA32 empty
May 12 13:27:41.952288 kernel: Normal empty
May 12 13:27:41.952294 kernel: Device empty
May 12 13:27:41.952300 kernel: Movable zone start for each node
May 12 13:27:41.952306 kernel: Early memory node ranges
May 12 13:27:41.952312 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 12 13:27:41.952318 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 12 13:27:41.952324 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 12 13:27:41.952330 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 12 13:27:41.952336 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 12 13:27:41.952342 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 12 13:27:41.952348 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 12 13:27:41.952354 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 12 13:27:41.952361 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 12 13:27:41.952367 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 12 13:27:41.952376 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 12 13:27:41.952383 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 12 13:27:41.952390 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 12 13:27:41.952397 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 12 13:27:41.952404 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 12 13:27:41.952410 kernel: psci: probing for conduit method from ACPI.
May 12 13:27:41.952431 kernel: psci: PSCIv1.1 detected in firmware.
May 12 13:27:41.952437 kernel: psci: Using standard PSCI v0.2 function IDs
May 12 13:27:41.952444 kernel: psci: Trusted OS migration not required
May 12 13:27:41.952451 kernel: psci: SMC Calling Convention v1.1
May 12 13:27:41.952458 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 12 13:27:41.952464 kernel: percpu: Embedded 31 pages/cpu s87016 r8192 d31768 u126976
May 12 13:27:41.952471 kernel: pcpu-alloc: s87016 r8192 d31768 u126976 alloc=31*4096
May 12 13:27:41.952477 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 12 13:27:41.952485 kernel: Detected PIPT I-cache on CPU0
May 12 13:27:41.952492 kernel: CPU features: detected: GIC system register CPU interface
May 12 13:27:41.952498 kernel: CPU features: detected: Hardware dirty bit management
May 12 13:27:41.952504 kernel: CPU features: detected: Spectre-v4
May 12 13:27:41.952511 kernel: CPU features: detected: Spectre-BHB
May 12 13:27:41.952517 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 12 13:27:41.952523 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 12 13:27:41.952530 kernel: CPU features: detected: ARM erratum 1418040
May 12 13:27:41.952536 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 12 13:27:41.952542 kernel: alternatives: applying boot alternatives
May 12 13:27:41.952550 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=653a96bf2da883847e3396e932e31f09e53181a834ffc22434c3993d29b70a16
May 12 13:27:41.952558 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 12 13:27:41.952565 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 12 13:27:41.952571 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 12 13:27:41.952578 kernel: Fallback order for Node 0: 0
May 12 13:27:41.952584 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 12 13:27:41.952590 kernel: Policy zone: DMA
May 12 13:27:41.952597 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 12 13:27:41.952603 kernel: software IO TLB: area num 4.
May 12 13:27:41.952609 kernel: software IO TLB: mapped [mem 0x00000000d5000000-0x00000000d9000000] (64MB)
May 12 13:27:41.952616 kernel: Memory: 2386504K/2572288K available (10432K kernel code, 2202K rwdata, 8168K rodata, 39040K init, 993K bss, 185784K reserved, 0K cma-reserved)
May 12 13:27:41.952623 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 12 13:27:41.952631 kernel: rcu: Preemptible hierarchical RCU implementation.
May 12 13:27:41.952637 kernel: rcu: RCU event tracing is enabled.
May 12 13:27:41.952644 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 12 13:27:41.952651 kernel: Trampoline variant of Tasks RCU enabled.
May 12 13:27:41.952657 kernel: Tracing variant of Tasks RCU enabled.
May 12 13:27:41.952663 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 12 13:27:41.952670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 12 13:27:41.952676 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 12 13:27:41.952683 kernel: GICv3: 256 SPIs implemented
May 12 13:27:41.952689 kernel: GICv3: 0 Extended SPIs implemented
May 12 13:27:41.952695 kernel: Root IRQ handler: gic_handle_irq
May 12 13:27:41.952701 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 12 13:27:41.952709 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 12 13:27:41.952716 kernel: ITS [mem 0x08080000-0x0809ffff]
May 12 13:27:41.952722 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
May 12 13:27:41.952729 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
May 12 13:27:41.952735 kernel: GICv3: using LPI property table @0x00000000400f0000
May 12 13:27:41.952741 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 12 13:27:41.952748 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 12 13:27:41.952754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 13:27:41.952761 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 12 13:27:41.952767 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 12 13:27:41.952774 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 12 13:27:41.952782 kernel: arm-pv: using stolen time PV
May 12 13:27:41.952788 kernel: Console: colour dummy device 80x25
May 12 13:27:41.952795 kernel: ACPI: Core revision 20230628
May 12 13:27:41.952802 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 12 13:27:41.952809 kernel: pid_max: default: 32768 minimum: 301
May 12 13:27:41.952815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 12 13:27:41.952822 kernel: landlock: Up and running.
May 12 13:27:41.952829 kernel: SELinux: Initializing.
May 12 13:27:41.952835 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 13:27:41.952843 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 13:27:41.952850 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 12 13:27:41.952856 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 13:27:41.952863 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 13:27:41.952870 kernel: rcu: Hierarchical SRCU implementation.
May 12 13:27:41.952876 kernel: rcu: Max phase no-delay instances is 400.
May 12 13:27:41.952883 kernel: Platform MSI: ITS@0x8080000 domain created
May 12 13:27:41.952889 kernel: PCI/MSI: ITS@0x8080000 domain created
May 12 13:27:41.952896 kernel: Remapping and enabling EFI services.
May 12 13:27:41.952904 kernel: smp: Bringing up secondary CPUs ...
May 12 13:27:41.952915 kernel: Detected PIPT I-cache on CPU1
May 12 13:27:41.952922 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 12 13:27:41.952930 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 12 13:27:41.952937 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 13:27:41.952944 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 12 13:27:41.952963 kernel: Detected PIPT I-cache on CPU2
May 12 13:27:41.952971 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 12 13:27:41.952978 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 12 13:27:41.952987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 13:27:41.952995 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 12 13:27:41.953001 kernel: Detected PIPT I-cache on CPU3
May 12 13:27:41.953008 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 12 13:27:41.953015 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 12 13:27:41.953022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 13:27:41.953029 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 12 13:27:41.953036 kernel: smp: Brought up 1 node, 4 CPUs
May 12 13:27:41.953042 kernel: SMP: Total of 4 processors activated.
May 12 13:27:41.953051 kernel: CPU features: detected: 32-bit EL0 Support
May 12 13:27:41.953057 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 12 13:27:41.953064 kernel: CPU features: detected: Common not Private translations
May 12 13:27:41.953071 kernel: CPU features: detected: CRC32 instructions
May 12 13:27:41.953078 kernel: CPU features: detected: Enhanced Virtualization Traps
May 12 13:27:41.953085 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 12 13:27:41.953091 kernel: CPU features: detected: LSE atomic instructions
May 12 13:27:41.953098 kernel: CPU features: detected: Privileged Access Never
May 12 13:27:41.953105 kernel: CPU features: detected: RAS Extension Support
May 12 13:27:41.953114 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 12 13:27:41.953120 kernel: CPU: All CPU(s) started at EL1
May 12 13:27:41.953127 kernel: alternatives: applying system-wide alternatives
May 12 13:27:41.953134 kernel: devtmpfs: initialized
May 12 13:27:41.953141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 12 13:27:41.953149 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 12 13:27:41.953156 kernel: pinctrl core: initialized pinctrl subsystem
May 12 13:27:41.953163 kernel: SMBIOS 3.0.0 present.
May 12 13:27:41.953170 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 12 13:27:41.953178 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 12 13:27:41.953198 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 12 13:27:41.953205 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 12 13:27:41.953213 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 12 13:27:41.953220 kernel: audit: initializing netlink subsys (disabled)
May 12 13:27:41.953227 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 12 13:27:41.953239 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 12 13:27:41.953246 kernel: cpuidle: using governor menu
May 12 13:27:41.953253 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 12 13:27:41.953262 kernel: ASID allocator initialised with 32768 entries
May 12 13:27:41.953276 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 12 13:27:41.953283 kernel: Serial: AMBA PL011 UART driver
May 12 13:27:41.953290 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 12 13:27:41.953296 kernel: Modules: 0 pages in range for non-PLT usage
May 12 13:27:41.953303 kernel: Modules: 509024 pages in range for PLT usage
May 12 13:27:41.953310 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 12 13:27:41.953317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 12 13:27:41.953324 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 12 13:27:41.953333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 12 13:27:41.953340 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 12 13:27:41.953347 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 12 13:27:41.953354 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 12 13:27:41.953361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 12 13:27:41.953369 kernel: ACPI: Added _OSI(Module Device)
May 12 13:27:41.953375 kernel: ACPI: Added _OSI(Processor Device)
May 12 13:27:41.953382 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 12 13:27:41.953389 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 12 13:27:41.953398 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 12 13:27:41.953405 kernel: ACPI: Interpreter enabled
May 12 13:27:41.953411 kernel: ACPI: Using GIC for interrupt routing
May 12 13:27:41.953418 kernel: ACPI: MCFG table detected, 1 entries
May 12 13:27:41.953425 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 12 13:27:41.953432 kernel: printk: console [ttyAMA0] enabled
May 12 13:27:41.953439 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 12 13:27:41.953572 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 12 13:27:41.953649 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 12 13:27:41.953712 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 12 13:27:41.953789 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 12 13:27:41.953851 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 12 13:27:41.953864 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 12 13:27:41.953871 kernel: PCI host bridge to bus 0000:00
May 12 13:27:41.953943 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 12 13:27:41.954026 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 12 13:27:41.954085 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 12 13:27:41.954143 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 12 13:27:41.954233 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 12 13:27:41.954326 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 12 13:27:41.954398 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 12 13:27:41.954462 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 12 13:27:41.954530 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 13:27:41.954594 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 13:27:41.954657 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 12 13:27:41.954721 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 12 13:27:41.954778 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 12 13:27:41.954836 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 12 13:27:41.954894 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 12 13:27:41.954905 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 12 13:27:41.954912 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 12 13:27:41.954919 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 12 13:27:41.954926 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 12 13:27:41.954932 kernel: iommu: Default domain type: Translated
May 12 13:27:41.954939 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 12 13:27:41.954946 kernel: efivars: Registered efivars operations
May 12 13:27:41.954964 kernel: vgaarb: loaded
May 12 13:27:41.954971 kernel: clocksource: Switched to clocksource arch_sys_counter
May 12 13:27:41.954981 kernel: VFS: Disk quotas dquot_6.6.0
May 12 13:27:41.954988 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 12 13:27:41.954994 kernel: pnp: PnP ACPI init
May 12 13:27:41.955070 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 12 13:27:41.955081 kernel: pnp: PnP ACPI: found 1 devices
May 12 13:27:41.955087 kernel: NET: Registered PF_INET protocol family
May 12 13:27:41.955095 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 12 13:27:41.955102 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 12 13:27:41.955111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 12 13:27:41.955118 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 12 13:27:41.955125 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 12 13:27:41.955132 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 12 13:27:41.955139 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 13:27:41.955146 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 13:27:41.955153 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 12 13:27:41.955160 kernel: PCI: CLS 0 bytes, default 64
May 12 13:27:41.955173 kernel: kvm [1]: HYP mode not available
May 12 13:27:41.955182 kernel: Initialise system trusted keyrings
May 12 13:27:41.955189 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 12 13:27:41.955196 kernel: Key type asymmetric registered
May 12 13:27:41.955203 kernel: Asymmetric key parser 'x509' registered
May 12 13:27:41.955209 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 12 13:27:41.955216 kernel: io scheduler mq-deadline registered
May 12 13:27:41.955223 kernel: io scheduler kyber registered
May 12 13:27:41.955230 kernel: io scheduler bfq registered
May 12 13:27:41.955237 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 12 13:27:41.955245 kernel: ACPI: button: Power Button [PWRB]
May 12 13:27:41.955253 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 12 13:27:41.955330 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 12 13:27:41.955341 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 12 13:27:41.955348 kernel: thunder_xcv, ver 1.0
May 12 13:27:41.955355 kernel: thunder_bgx, ver 1.0
May 12 13:27:41.955361 kernel: nicpf, ver 1.0
May 12 13:27:41.955368 kernel: nicvf, ver 1.0
May 12 13:27:41.955442 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 12 13:27:41.955507 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-12T13:27:41 UTC (1747056461)
May 12 13:27:41.955516 kernel: hid: raw HID events driver (C) Jiri Kosina
May 12 13:27:41.955523 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 12 13:27:41.955531 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 12 13:27:41.955538 kernel: watchdog: Hard watchdog permanently disabled
May 12 13:27:41.955544 kernel: NET: Registered PF_INET6 protocol family
May 12 13:27:41.955551 kernel: Segment Routing with IPv6
May 12 13:27:41.955558 kernel: In-situ OAM (IOAM) with IPv6
May 12 13:27:41.955567 kernel: NET: Registered PF_PACKET protocol family
May 12 13:27:41.955573 kernel: Key type dns_resolver registered
May 12 13:27:41.955580 kernel: registered taskstats version 1
May 12 13:27:41.955587 kernel: Loading compiled-in X.509 certificates
May 12 13:27:41.955594 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 8a19376c4ffd0604cd5425566348a3f0eeb277da'
May 12 13:27:41.955601 kernel: Key type .fscrypt registered
May 12 13:27:41.955608 kernel: Key type fscrypt-provisioning registered
May 12 13:27:41.955615 kernel: ima: No TPM chip found, activating TPM-bypass!
May 12 13:27:41.955622 kernel: ima: Allocated hash algorithm: sha1
May 12 13:27:41.955631 kernel: ima: No architecture policies found
May 12 13:27:41.955638 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 12 13:27:41.955644 kernel: clk: Disabling unused clocks
May 12 13:27:41.955651 kernel: Warning: unable to open an initial console.
May 12 13:27:41.955658 kernel: Freeing unused kernel memory: 39040K
May 12 13:27:41.955665 kernel: Run /init as init process
May 12 13:27:41.955672 kernel: with arguments:
May 12 13:27:41.955679 kernel: /init
May 12 13:27:41.955686 kernel: with environment:
May 12 13:27:41.955695 kernel: HOME=/
May 12 13:27:41.955701 kernel: TERM=linux
May 12 13:27:41.955708 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 12 13:27:41.955716 systemd[1]: Successfully made /usr/ read-only.
May 12 13:27:41.955726 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 12 13:27:41.955734 systemd[1]: Detected virtualization kvm.
May 12 13:27:41.955741 systemd[1]: Detected architecture arm64.
May 12 13:27:41.955749 systemd[1]: Running in initrd.
May 12 13:27:41.955756 systemd[1]: No hostname configured, using default hostname.
May 12 13:27:41.955764 systemd[1]: Hostname set to .
May 12 13:27:41.955771 systemd[1]: Initializing machine ID from VM UUID.
May 12 13:27:41.955778 systemd[1]: Queued start job for default target initrd.target.
May 12 13:27:41.955786 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:27:41.955793 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:27:41.955801 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 12 13:27:41.955810 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 13:27:41.955817 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 12 13:27:41.955825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 12 13:27:41.955833 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 12 13:27:41.955841 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 12 13:27:41.955848 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:27:41.955855 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 13:27:41.955864 systemd[1]: Reached target paths.target - Path Units.
May 12 13:27:41.955872 systemd[1]: Reached target slices.target - Slice Units.
May 12 13:27:41.955879 systemd[1]: Reached target swap.target - Swaps.
May 12 13:27:41.955886 systemd[1]: Reached target timers.target - Timer Units.
May 12 13:27:41.955893 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 12 13:27:41.955901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 13:27:41.955908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 12 13:27:41.955915 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 12 13:27:41.955923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 13:27:41.955932 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 13:27:41.955939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 13:27:41.955956 systemd[1]: Reached target sockets.target - Socket Units.
May 12 13:27:41.955966 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 12 13:27:41.955974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 13:27:41.955981 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 12 13:27:41.955989 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 12 13:27:41.955997 systemd[1]: Starting systemd-fsck-usr.service...
May 12 13:27:41.956006 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 13:27:41.956014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 13:27:41.956021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 13:27:41.956028 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 12 13:27:41.956036 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 13:27:41.956045 systemd[1]: Finished systemd-fsck-usr.service.
May 12 13:27:41.956053 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 12 13:27:41.956076 systemd-journald[237]: Collecting audit messages is disabled.
May 12 13:27:41.956094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:27:41.956104 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 12 13:27:41.956111 kernel: Bridge firewalling registered
May 12 13:27:41.956118 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 13:27:41.956126 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 13:27:41.956134 systemd-journald[237]: Journal started
May 12 13:27:41.956152 systemd-journald[237]: Runtime Journal (/run/log/journal/d18733d2175e47fdb5a281d3e4ffaab7) is 5.9M, max 47.3M, 41.4M free.
May 12 13:27:41.936474 systemd-modules-load[238]: Inserted module 'overlay'
May 12 13:27:41.951489 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 12 13:27:41.961071 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 13:27:41.961337 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 13:27:41.965690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 13:27:41.968306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 13:27:41.977888 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 13:27:41.979464 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 13:27:41.985058 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 12 13:27:41.986337 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 13:27:41.988608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 13:27:41.991253 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 12 13:27:41.994057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 13:27:41.996839 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 13:27:42.001019 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=653a96bf2da883847e3396e932e31f09e53181a834ffc22434c3993d29b70a16
May 12 13:27:42.035916 systemd-resolved[293]: Positive Trust Anchors:
May 12 13:27:42.035937 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 13:27:42.035982 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 13:27:42.040747 systemd-resolved[293]: Defaulting to hostname 'linux'.
May 12 13:27:42.044358 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 12 13:27:42.045526 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 13:27:42.069983 kernel: SCSI subsystem initialized
May 12 13:27:42.074970 kernel: Loading iSCSI transport class v2.0-870.
May 12 13:27:42.081966 kernel: iscsi: registered transport (tcp)
May 12 13:27:42.096977 kernel: iscsi: registered transport (qla4xxx)
May 12 13:27:42.096998 kernel: QLogic iSCSI HBA Driver
May 12 13:27:42.113496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 12 13:27:42.140009 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 12 13:27:42.142316 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 12 13:27:42.188464 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 12 13:27:42.190737 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 12 13:27:42.249983 kernel: raid6: neonx8 gen() 15764 MB/s
May 12 13:27:42.266980 kernel: raid6: neonx4 gen() 15779 MB/s
May 12 13:27:42.283975 kernel: raid6: neonx2 gen() 13223 MB/s
May 12 13:27:42.300983 kernel: raid6: neonx1 gen() 10489 MB/s
May 12 13:27:42.317984 kernel: raid6: int64x8 gen() 6774 MB/s
May 12 13:27:42.334983 kernel: raid6: int64x4 gen() 7333 MB/s
May 12 13:27:42.351984 kernel: raid6: int64x2 gen() 6096 MB/s
May 12 13:27:42.369091 kernel: raid6: int64x1 gen() 5040 MB/s
May 12 13:27:42.369106 kernel: raid6: using algorithm neonx4 gen() 15779 MB/s
May 12 13:27:42.387279 kernel: raid6: .... xor() 12307 MB/s, rmw enabled
May 12 13:27:42.387311 kernel: raid6: using neon recovery algorithm
May 12 13:27:42.394385 kernel: xor: measuring software checksum speed
May 12 13:27:42.394402 kernel: 8regs : 21653 MB/sec
May 12 13:27:42.395043 kernel: 32regs : 21693 MB/sec
May 12 13:27:42.396284 kernel: arm64_neon : 26848 MB/sec
May 12 13:27:42.396299 kernel: xor: using function: arm64_neon (26848 MB/sec)
May 12 13:27:42.443974 kernel: Btrfs loaded, zoned=no, fsverity=no
May 12 13:27:42.450060 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 12 13:27:42.452697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 13:27:42.488968 systemd-udevd[495]: Using default interface naming scheme 'v255'.
May 12 13:27:42.493212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 13:27:42.496121 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 12 13:27:42.525029 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation
May 12 13:27:42.545604 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 13:27:42.547821 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 13:27:42.604345 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 13:27:42.607582 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 12 13:27:42.648976 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 12 13:27:42.654359 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 12 13:27:42.661851 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 12 13:27:42.661899 kernel: GPT:9289727 != 19775487
May 12 13:27:42.661910 kernel: GPT:Alternate GPT header not at the end of the disk.
May 12 13:27:42.663272 kernel: GPT:9289727 != 19775487
May 12 13:27:42.663308 kernel: GPT: Use GNU Parted to correct GPT errors.
May 12 13:27:42.664006 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 13:27:42.664942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 13:27:42.665080 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:27:42.668387 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 12 13:27:42.670665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 13:27:42.689965 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (551)
May 12 13:27:42.689999 kernel: BTRFS: device fsid 883e681e-770a-479b-951e-bb0dc342f721 devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (556)
May 12 13:27:42.692787 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 12 13:27:42.707458 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 12 13:27:42.708920 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 12 13:27:42.712288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:27:42.733031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 13:27:42.740253 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 12 13:27:42.741446 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 12 13:27:42.743747 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 13:27:42.746714 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 13:27:42.748818 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 13:27:42.751624 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 12 13:27:42.753416 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 12 13:27:42.769283 disk-uuid[586]: Primary Header is updated.
May 12 13:27:42.769283 disk-uuid[586]: Secondary Entries is updated.
May 12 13:27:42.769283 disk-uuid[586]: Secondary Header is updated.
May 12 13:27:42.772603 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 13:27:42.778996 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 12 13:27:43.782977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 13:27:43.783736 disk-uuid[589]: The operation has completed successfully.
May 12 13:27:43.811498 systemd[1]: disk-uuid.service: Deactivated successfully.
May 12 13:27:43.811604 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 12 13:27:43.840331 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 12 13:27:43.854735 sh[606]: Success
May 12 13:27:43.868373 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 12 13:27:43.868416 kernel: device-mapper: uevent: version 1.0.3
May 12 13:27:43.869440 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 12 13:27:43.880924 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 12 13:27:43.905971 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 12 13:27:43.908841 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 12 13:27:43.923213 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 12 13:27:43.930555 kernel: BTRFS info (device dm-0): first mount of filesystem 883e681e-770a-479b-951e-bb0dc342f721
May 12 13:27:43.930591 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 12 13:27:43.930602 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 12 13:27:43.931675 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 12 13:27:43.933077 kernel: BTRFS info (device dm-0): using free space tree
May 12 13:27:43.936244 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 12 13:27:43.937511 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 12 13:27:43.938935 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 12 13:27:43.939697 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 12 13:27:43.941298 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 12 13:27:43.969610 kernel: BTRFS info (device vda6): first mount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:27:43.969650 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 13:27:43.969660 kernel: BTRFS info (device vda6): using free space tree
May 12 13:27:43.971965 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 13:27:43.975964 kernel: BTRFS info (device vda6): last unmount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:27:43.979117 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 12 13:27:43.981010 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 12 13:27:44.053592 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 13:27:44.058124 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 13:27:44.103180 systemd-networkd[798]: lo: Link UP
May 12 13:27:44.103194 systemd-networkd[798]: lo: Gained carrier
May 12 13:27:44.103984 systemd-networkd[798]: Enumeration completed
May 12 13:27:44.104433 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 13:27:44.104436 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 12 13:27:44.105152 systemd-networkd[798]: eth0: Link UP
May 12 13:27:44.105155 systemd-networkd[798]: eth0: Gained carrier
May 12 13:27:44.105164 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 13:27:44.106057 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 12 13:27:44.107266 systemd[1]: Reached target network.target - Network.
May 12 13:27:44.117014 ignition[701]: Ignition 2.21.0
May 12 13:27:44.117025 ignition[701]: Stage: fetch-offline
May 12 13:27:44.117100 ignition[701]: no configs at "/usr/lib/ignition/base.d"
May 12 13:27:44.117110 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:27:44.117790 ignition[701]: parsed url from cmdline: ""
May 12 13:27:44.117795 ignition[701]: no config URL provided
May 12 13:27:44.117806 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
May 12 13:27:44.122013 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 12 13:27:44.118697 ignition[701]: no config at "/usr/lib/ignition/user.ign"
May 12 13:27:44.118731 ignition[701]: op(1): [started] loading QEMU firmware config module
May 12 13:27:44.118735 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 12 13:27:44.129244 ignition[701]: op(1): [finished] loading QEMU firmware config module
May 12 13:27:44.166314 ignition[701]: parsing config with SHA512: 3b1126671d75c1067cfe8a3cb1a08c83fc2c8ed5b93cda8ca25b0311cb1c7113f8238a67c6d0c1de0aa3d891bddb5f1eb1bd3bb7b16f51b65f3c152201e87287
May 12 13:27:44.170481 unknown[701]: fetched base config from "system"
May 12 13:27:44.170495 unknown[701]: fetched user config from "qemu"
May 12 13:27:44.170794 ignition[701]: fetch-offline: fetch-offline passed
May 12 13:27:44.170846 ignition[701]: Ignition finished successfully
May 12 13:27:44.174117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 13:27:44.175675 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 12 13:27:44.178174 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 12 13:27:44.203411 ignition[806]: Ignition 2.21.0
May 12 13:27:44.203426 ignition[806]: Stage: kargs
May 12 13:27:44.203545 ignition[806]: no configs at "/usr/lib/ignition/base.d"
May 12 13:27:44.203554 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:27:44.205219 ignition[806]: kargs: kargs passed
May 12 13:27:44.207520 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 12 13:27:44.205286 ignition[806]: Ignition finished successfully
May 12 13:27:44.209434 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 12 13:27:44.240864 ignition[814]: Ignition 2.21.0
May 12 13:27:44.240882 ignition[814]: Stage: disks
May 12 13:27:44.241039 ignition[814]: no configs at "/usr/lib/ignition/base.d"
May 12 13:27:44.241050 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:27:44.243982 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 12 13:27:44.241967 ignition[814]: disks: disks passed
May 12 13:27:44.246022 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 12 13:27:44.242146 ignition[814]: Ignition finished successfully
May 12 13:27:44.247672 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 12 13:27:44.249291 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 13:27:44.251140 systemd[1]: Reached target sysinit.target - System Initialization.
May 12 13:27:44.252682 systemd[1]: Reached target basic.target - Basic System.
May 12 13:27:44.255341 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 12 13:27:44.274290 systemd-fsck[824]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 12 13:27:44.277656 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 12 13:27:44.279695 systemd[1]: Mounting sysroot.mount - /sysroot...
May 12 13:27:44.339974 kernel: EXT4-fs (vda9): mounted filesystem bc1f18c3-3425-4388-a617-b7347003d935 r/w with ordered data mode. Quota mode: none.
May 12 13:27:44.340408 systemd[1]: Mounted sysroot.mount - /sysroot.
May 12 13:27:44.341665 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 12 13:27:44.343912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 13:27:44.345500 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 12 13:27:44.346529 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 12 13:27:44.346570 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 12 13:27:44.346592 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 13:27:44.358421 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 12 13:27:44.360752 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 12 13:27:44.365885 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (833)
May 12 13:27:44.365906 kernel: BTRFS info (device vda6): first mount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:27:44.365915 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 13:27:44.365930 kernel: BTRFS info (device vda6): using free space tree
May 12 13:27:44.368975 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 13:27:44.369454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 13:27:44.410259 initrd-setup-root[857]: cut: /sysroot/etc/passwd: No such file or directory
May 12 13:27:44.414027 initrd-setup-root[864]: cut: /sysroot/etc/group: No such file or directory
May 12 13:27:44.417986 initrd-setup-root[871]: cut: /sysroot/etc/shadow: No such file or directory
May 12 13:27:44.421614 initrd-setup-root[878]: cut: /sysroot/etc/gshadow: No such file or directory
May 12 13:27:44.489083 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 12 13:27:44.493022 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 12 13:27:44.494554 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 12 13:27:44.506985 kernel: BTRFS info (device vda6): last unmount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:27:44.515631 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 12 13:27:44.533930 ignition[948]: INFO : Ignition 2.21.0
May 12 13:27:44.533930 ignition[948]: INFO : Stage: mount
May 12 13:27:44.536113 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 13:27:44.536113 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:27:44.536113 ignition[948]: INFO : mount: mount passed
May 12 13:27:44.536113 ignition[948]: INFO : Ignition finished successfully
May 12 13:27:44.537586 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 12 13:27:44.542357 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 12 13:27:44.937219 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 12 13:27:44.938818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 13:27:44.953992 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (959)
May 12 13:27:44.956458 kernel: BTRFS info (device vda6): first mount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:27:44.956481 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 13:27:44.957157 kernel: BTRFS info (device vda6): using free space tree
May 12 13:27:44.959991 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 13:27:44.960570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 13:27:44.983836 ignition[977]: INFO : Ignition 2.21.0
May 12 13:27:44.983836 ignition[977]: INFO : Stage: files
May 12 13:27:44.985491 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 13:27:44.985491 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:27:44.985491 ignition[977]: DEBUG : files: compiled without relabeling support, skipping
May 12 13:27:44.985491 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 12 13:27:44.985491 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 12 13:27:44.991563 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 12 13:27:44.991563 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 12 13:27:44.991563 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 12 13:27:44.991563 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 12 13:27:44.991563 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 12 13:27:44.987570 unknown[977]: wrote ssh authorized keys file for user: core
May 12 13:27:45.027262 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 12 13:27:45.285982 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 13:27:45.289286 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 12 13:27:45.304436 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 12 13:27:45.304436 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 12 13:27:45.304436 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 12 13:27:45.304436 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 12 13:27:45.304436 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 12 13:27:45.601194 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 12 13:27:45.602079 systemd-networkd[798]: eth0: Gained IPv6LL
May 12 13:27:45.850803 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 12 13:27:45.850803 ignition[977]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 12 13:27:45.854311 ignition[977]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 13:27:45.854311 ignition[977]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 13:27:45.854311 ignition[977]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 12 13:27:45.854311 ignition[977]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 12 13:27:45.854311 ignition[977]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 13:27:45.854311 ignition[977]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 13:27:45.854311 ignition[977]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 12 13:27:45.854311 ignition[977]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 12 13:27:45.869149 ignition[977]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 12 13:27:45.872214 ignition[977]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 12 13:27:45.874876 ignition[977]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 12 13:27:45.874876 ignition[977]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 12 13:27:45.874876 ignition[977]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 12 13:27:45.874876 ignition[977]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 12 13:27:45.874876 ignition[977]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 12 13:27:45.874876 ignition[977]: INFO : files: files passed
May 12 13:27:45.874876 ignition[977]: INFO : Ignition finished successfully
May 12 13:27:45.875748 systemd[1]: Finished ignition-files.service - Ignition (files).
May 12 13:27:45.878228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 12 13:27:45.880642 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 12 13:27:45.894117 systemd[1]: ignition-quench.service: Deactivated successfully.
May 12 13:27:45.894215 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 12 13:27:45.897467 initrd-setup-root-after-ignition[1005]: grep: /sysroot/oem/oem-release: No such file or directory
May 12 13:27:45.898788 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 13:27:45.898788 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 12 13:27:45.901772 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 13:27:45.901718 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 13:27:45.903094 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 12 13:27:45.906010 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 12 13:27:45.947052 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 12 13:27:45.947179 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 12 13:27:45.949456 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 12 13:27:45.951260 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 12 13:27:45.953035 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 12 13:27:45.953773 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 12 13:27:45.974074 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 13:27:45.976370 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 12 13:27:45.997725 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 12 13:27:45.998990 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 13:27:46.001076 systemd[1]: Stopped target timers.target - Timer Units.
May 12 13:27:46.002915 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 12 13:27:46.003053 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 13:27:46.005570 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 12 13:27:46.007558 systemd[1]: Stopped target basic.target - Basic System.
May 12 13:27:46.009184 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 12 13:27:46.010870 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 13:27:46.012996 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 12 13:27:46.014980 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 12 13:27:46.016913 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 12 13:27:46.018765 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 13:27:46.020713 systemd[1]: Stopped target sysinit.target - System Initialization.
May 12 13:27:46.022658 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 12 13:27:46.024373 systemd[1]: Stopped target swap.target - Swaps.
May 12 13:27:46.025838 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 12 13:27:46.025981 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 12 13:27:46.028280 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 12 13:27:46.030170 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:27:46.032085 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 12 13:27:46.032190 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:27:46.034149 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 12 13:27:46.034276 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 12 13:27:46.037096 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 12 13:27:46.037219 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 13:27:46.039129 systemd[1]: Stopped target paths.target - Path Units.
May 12 13:27:46.040693 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 12 13:27:46.045007 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:27:46.046268 systemd[1]: Stopped target slices.target - Slice Units.
May 12 13:27:46.048322 systemd[1]: Stopped target sockets.target - Socket Units.
May 12 13:27:46.049851 systemd[1]: iscsid.socket: Deactivated successfully.
May 12 13:27:46.049939 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 12 13:27:46.051527 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 12 13:27:46.051612 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 13:27:46.053169 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 12 13:27:46.053299 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 13:27:46.055110 systemd[1]: ignition-files.service: Deactivated successfully.
May 12 13:27:46.055220 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 12 13:27:46.057585 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 12 13:27:46.060155 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 12 13:27:46.061366 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 12 13:27:46.061508 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 13:27:46.063301 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 12 13:27:46.063407 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 13:27:46.069142 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 12 13:27:46.069220 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 12 13:27:46.077976 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 12 13:27:46.082479 ignition[1032]: INFO : Ignition 2.21.0
May 12 13:27:46.082479 ignition[1032]: INFO : Stage: umount
May 12 13:27:46.085439 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 13:27:46.085439 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:27:46.087774 ignition[1032]: INFO : umount: umount passed
May 12 13:27:46.089807 ignition[1032]: INFO : Ignition finished successfully
May 12 13:27:46.090434 systemd[1]: ignition-mount.service: Deactivated successfully.
May 12 13:27:46.090538 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 12 13:27:46.091794 systemd[1]: Stopped target network.target - Network.
May 12 13:27:46.093588 systemd[1]: ignition-disks.service: Deactivated successfully.
May 12 13:27:46.093646 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 12 13:27:46.095339 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 12 13:27:46.095385 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 12 13:27:46.096927 systemd[1]: ignition-setup.service: Deactivated successfully.
May 12 13:27:46.096996 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 12 13:27:46.098772 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 12 13:27:46.098811 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 12 13:27:46.100664 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 12 13:27:46.102343 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 12 13:27:46.106379 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 12 13:27:46.106485 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 12 13:27:46.109898 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 12 13:27:46.110224 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 12 13:27:46.110276 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 13:27:46.113780 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 12 13:27:46.114007 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 12 13:27:46.114105 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 12 13:27:46.116994 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 12 13:27:46.117348 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 12 13:27:46.118493 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 12 13:27:46.118533 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 12 13:27:46.121562 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 12 13:27:46.122944 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 12 13:27:46.123014 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 13:27:46.125209 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 12 13:27:46.125272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 12 13:27:46.128247 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 12 13:27:46.128301 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 12 13:27:46.130533 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 13:27:46.133215 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 12 13:27:46.138101 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 12 13:27:46.138188 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 12 13:27:46.141304 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 12 13:27:46.141385 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 12 13:27:46.143517 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 12 13:27:46.143655 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 13:27:46.156289 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 12 13:27:46.156349 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 12 13:27:46.157475 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 12 13:27:46.157506 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 13:27:46.159326 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 12 13:27:46.159377 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 12 13:27:46.161991 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 12 13:27:46.162038 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 12 13:27:46.164687 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 12 13:27:46.164733 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 13:27:46.168318 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 12 13:27:46.169463 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 12 13:27:46.169522 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 12 13:27:46.172623 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 12 13:27:46.172670 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 13:27:46.176004 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 12 13:27:46.176050 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 13:27:46.179616 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 12 13:27:46.179659 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 13:27:46.182050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 13:27:46.182096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:27:46.185994 systemd[1]: network-cleanup.service: Deactivated successfully.
May 12 13:27:46.186080 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 12 13:27:46.187443 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 12 13:27:46.187511 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 12 13:27:46.190089 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 12 13:27:46.191901 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 12 13:27:46.210443 systemd[1]: Switching root.
May 12 13:27:46.238132 systemd-journald[237]: Journal stopped
May 12 13:27:46.972170 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
May 12 13:27:46.972224 kernel: SELinux: policy capability network_peer_controls=1
May 12 13:27:46.972239 kernel: SELinux: policy capability open_perms=1
May 12 13:27:46.972260 kernel: SELinux: policy capability extended_socket_class=1
May 12 13:27:46.972271 kernel: SELinux: policy capability always_check_network=0
May 12 13:27:46.972280 kernel: SELinux: policy capability cgroup_seclabel=1
May 12 13:27:46.972289 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 12 13:27:46.972298 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 12 13:27:46.972307 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 12 13:27:46.972319 kernel: audit: type=1403 audit(1747056466.388:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 12 13:27:46.972332 systemd[1]: Successfully loaded SELinux policy in 37.446ms.
May 12 13:27:46.972354 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.734ms.
May 12 13:27:46.972365 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 12 13:27:46.972377 systemd[1]: Detected virtualization kvm.
May 12 13:27:46.972387 systemd[1]: Detected architecture arm64.
May 12 13:27:46.972398 systemd[1]: Detected first boot.
May 12 13:27:46.972408 systemd[1]: Initializing machine ID from VM UUID.
May 12 13:27:46.972418 zram_generator::config[1078]: No configuration found.
May 12 13:27:46.972430 kernel: NET: Registered PF_VSOCK protocol family
May 12 13:27:46.972439 systemd[1]: Populated /etc with preset unit settings.
May 12 13:27:46.972466 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 12 13:27:46.972482 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 12 13:27:46.972494 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 12 13:27:46.972504 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 12 13:27:46.972514 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 12 13:27:46.972525 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 12 13:27:46.972535 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 12 13:27:46.972545 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 12 13:27:46.972555 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 12 13:27:46.972565 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 12 13:27:46.972577 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 12 13:27:46.972587 systemd[1]: Created slice user.slice - User and Session Slice.
May 12 13:27:46.972597 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:27:46.972608 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:27:46.972619 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 12 13:27:46.972628 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 12 13:27:46.972639 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 12 13:27:46.972649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 13:27:46.972659 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 12 13:27:46.972670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:27:46.972680 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 13:27:46.972691 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 12 13:27:46.972701 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 12 13:27:46.972711 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 12 13:27:46.972722 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 12 13:27:46.972732 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 13:27:46.972742 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 13:27:46.972753 systemd[1]: Reached target slices.target - Slice Units.
May 12 13:27:46.972763 systemd[1]: Reached target swap.target - Swaps.
May 12 13:27:46.972773 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 12 13:27:46.972783 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 12 13:27:46.972793 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 12 13:27:46.972803 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 13:27:46.972813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 13:27:46.972823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 13:27:46.972833 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 12 13:27:46.972845 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 12 13:27:46.972855 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 12 13:27:46.972865 systemd[1]: Mounting media.mount - External Media Directory...
May 12 13:27:46.972875 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 12 13:27:46.972885 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 12 13:27:46.972895 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 12 13:27:46.972906 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 12 13:27:46.972917 systemd[1]: Reached target machines.target - Containers.
May 12 13:27:46.972928 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 12 13:27:46.972938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:27:46.972966 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 13:27:46.972980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 12 13:27:46.972991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:27:46.973001 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 13:27:46.973011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:27:46.973021 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 12 13:27:46.973031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:27:46.973044 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 12 13:27:46.973054 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 12 13:27:46.973064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 12 13:27:46.973075 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 12 13:27:46.973084 kernel: fuse: init (API version 7.39)
May 12 13:27:46.973095 systemd[1]: Stopped systemd-fsck-usr.service.
May 12 13:27:46.973105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:27:46.973116 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 13:27:46.973127 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 13:27:46.973137 kernel: loop: module loaded
May 12 13:27:46.973146 kernel: ACPI: bus type drm_connector registered
May 12 13:27:46.973156 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 12 13:27:46.973167 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 12 13:27:46.973177 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 12 13:27:46.973189 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 13:27:46.973199 systemd[1]: verity-setup.service: Deactivated successfully.
May 12 13:27:46.973211 systemd[1]: Stopped verity-setup.service.
May 12 13:27:46.973242 systemd-journald[1146]: Collecting audit messages is disabled.
May 12 13:27:46.973270 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 12 13:27:46.973286 systemd-journald[1146]: Journal started
May 12 13:27:46.973311 systemd-journald[1146]: Runtime Journal (/run/log/journal/d18733d2175e47fdb5a281d3e4ffaab7) is 5.9M, max 47.3M, 41.4M free.
May 12 13:27:46.758574 systemd[1]: Queued start job for default target multi-user.target.
May 12 13:27:46.770727 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 12 13:27:46.771110 systemd[1]: systemd-journald.service: Deactivated successfully.
May 12 13:27:46.976832 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 13:27:46.977555 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 12 13:27:46.978800 systemd[1]: Mounted media.mount - External Media Directory.
May 12 13:27:46.980017 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 12 13:27:46.981235 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 12 13:27:46.982484 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 12 13:27:46.984989 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 12 13:27:46.986392 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 13:27:46.987888 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 12 13:27:46.988105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 12 13:27:46.989509 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:27:46.989662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:27:46.991067 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 12 13:27:46.991232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 12 13:27:46.992665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:27:46.992826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:27:46.994442 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 12 13:27:46.994600 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 12 13:27:46.995993 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:27:46.996135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:27:46.997592 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 13:27:46.999012 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 12 13:27:47.000507 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 12 13:27:47.002118 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 12 13:27:47.014289 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 12 13:27:47.016673 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 12 13:27:47.018722 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 12 13:27:47.019964 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 12 13:27:47.019992 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 13:27:47.021828 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 12 13:27:47.024788 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 12 13:27:47.026044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:27:47.027006 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 12 13:27:47.028879 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 12 13:27:47.030121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 13:27:47.031106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 12 13:27:47.035092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 13:27:47.036208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 13:27:47.036712 systemd-journald[1146]: Time spent on flushing to /var/log/journal/d18733d2175e47fdb5a281d3e4ffaab7 is 11.499ms for 877 entries.
May 12 13:27:47.036712 systemd-journald[1146]: System Journal (/var/log/journal/d18733d2175e47fdb5a281d3e4ffaab7) is 8M, max 195.6M, 187.6M free.
May 12 13:27:47.064894 systemd-journald[1146]: Received client request to flush runtime journal.
May 12 13:27:47.039141 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 12 13:27:47.051205 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 12 13:27:47.053908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 13:27:47.055571 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 12 13:27:47.056968 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 12 13:27:47.065998 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 12 13:27:47.067918 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 12 13:27:47.070226 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 12 13:27:47.071194 kernel: loop0: detected capacity change from 0 to 138376
May 12 13:27:47.076203 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 12 13:27:47.077810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 13:27:47.085104 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 12 13:27:47.085124 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
May 12 13:27:47.090468 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 13:27:47.093013 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 12 13:27:47.095455 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 12 13:27:47.112558 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 12 13:27:47.128850 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 12 13:27:47.133293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 13:27:47.136986 kernel: loop1: detected capacity change from 0 to 107312
May 12 13:27:47.151918 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
May 12 13:27:47.151936 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
May 12 13:27:47.155762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 13:27:47.178972 kernel: loop2: detected capacity change from 0 to 194096
May 12 13:27:47.224977 kernel: loop3: detected capacity change from 0 to 138376
May 12 13:27:47.230972 kernel: loop4: detected capacity change from 0 to 107312
May 12 13:27:47.235981 kernel: loop5: detected capacity change from 0 to 194096
May 12 13:27:47.240851 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 12 13:27:47.241524 (sd-merge)[1220]: Merged extensions into '/usr'.
May 12 13:27:47.244877 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)...
May 12 13:27:47.244894 systemd[1]: Reloading...
May 12 13:27:47.297089 zram_generator::config[1247]: No configuration found.
May 12 13:27:47.317215 ldconfig[1190]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 12 13:27:47.374286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 13:27:47.436679 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 12 13:27:47.436935 systemd[1]: Reloading finished in 191 ms.
May 12 13:27:47.458499 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 12 13:27:47.460084 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 12 13:27:47.472232 systemd[1]: Starting ensure-sysext.service...
May 12 13:27:47.474033 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 13:27:47.487006 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 12 13:27:47.490252 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 13:27:47.494561 systemd[1]: Reload requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)...
May 12 13:27:47.494576 systemd[1]: Reloading...
May 12 13:27:47.495962 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 12 13:27:47.496004 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 12 13:27:47.496226 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 12 13:27:47.496436 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 12 13:27:47.497110 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 12 13:27:47.497339 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
May 12 13:27:47.497383 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
May 12 13:27:47.499932 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
May 12 13:27:47.499937 systemd-tmpfiles[1282]: Skipping /boot
May 12 13:27:47.509613 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
May 12 13:27:47.509629 systemd-tmpfiles[1282]: Skipping /boot
May 12 13:27:47.527854 systemd-udevd[1285]: Using default interface naming scheme 'v255'.
May 12 13:27:47.541122 zram_generator::config[1310]: No configuration found.
May 12 13:27:47.600970 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1343)
May 12 13:27:47.647393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 13:27:47.726385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 13:27:47.727805 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 12 13:27:47.728219 systemd[1]: Reloading finished in 233 ms.
May 12 13:27:47.741542 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 13:27:47.762335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 13:27:47.783018 systemd[1]: Finished ensure-sysext.service.
May 12 13:27:47.798971 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 12 13:27:47.801208 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 12 13:27:47.802554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:27:47.809742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:27:47.811898 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 13:27:47.814173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:27:47.816168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:27:47.817435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:27:47.818575 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 12 13:27:47.820080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:27:47.822132 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 12 13:27:47.826236 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 13:27:47.834662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 13:27:47.838724 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 12 13:27:47.842192 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 12 13:27:47.844412 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 13:27:47.853520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:27:47.853687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:27:47.856368 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 12 13:27:47.856537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 12 13:27:47.858056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:27:47.859100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:27:47.863510 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:27:47.864477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:27:47.867764 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 12 13:27:47.869907 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 12 13:27:47.873910 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 12 13:27:47.881259 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 13:27:47.881397 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 13:27:47.882705 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 12 13:27:47.886022 augenrules[1436]: No rules
May 12 13:27:47.886304 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 12 13:27:47.887336 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 12 13:27:47.891870 systemd[1]: audit-rules.service: Deactivated successfully.
May 12 13:27:47.893643 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 12 13:27:47.895015 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 12 13:27:47.899292 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 12 13:27:47.902718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:27:47.922440 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 12 13:27:47.987001 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 12 13:27:47.988358 systemd[1]: Reached target time-set.target - System Time Set.
May 12 13:27:47.993724 systemd-resolved[1408]: Positive Trust Anchors:
May 12 13:27:47.993744 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 13:27:47.993776 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 13:27:48.001918 systemd-resolved[1408]: Defaulting to hostname 'linux'.
May 12 13:27:48.005985 systemd-networkd[1407]: lo: Link UP
May 12 13:27:48.005995 systemd-networkd[1407]: lo: Gained carrier
May 12 13:27:48.009961 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 12 13:27:48.010129 systemd-networkd[1407]: Enumeration completed
May 12 13:27:48.010562 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 13:27:48.010572 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 12 13:27:48.011102 systemd-networkd[1407]: eth0: Link UP
May 12 13:27:48.011111 systemd-networkd[1407]: eth0: Gained carrier
May 12 13:27:48.011125 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 13:27:48.011252 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 12 13:27:48.012610 systemd[1]: Reached target network.target - Network.
May 12 13:27:48.013599 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 13:27:48.014846 systemd[1]: Reached target sysinit.target - System Initialization.
May 12 13:27:48.015980 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 12 13:27:48.017192 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 12 13:27:48.018579 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 12 13:27:48.019749 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 12 13:27:48.021004 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 12 13:27:48.022398 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 12 13:27:48.022429 systemd[1]: Reached target paths.target - Path Units.
May 12 13:27:48.023454 systemd[1]: Reached target timers.target - Timer Units.
May 12 13:27:48.025285 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 12 13:27:48.027678 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 12 13:27:48.030778 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 12 13:27:48.032196 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 12 13:27:48.033444 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 12 13:27:48.033990 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.76/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 12 13:27:48.036489 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 12 13:27:48.037888 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 12 13:27:48.038036 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
May 12 13:27:48.039307 systemd-timesyncd[1412]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 12 13:27:48.039349 systemd-timesyncd[1412]: Initial clock synchronization to Mon 2025-05-12 13:27:47.729791 UTC.
May 12 13:27:48.040358 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 12 13:27:48.042410 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 12 13:27:48.044071 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 12 13:27:48.045318 systemd[1]: Reached target sockets.target - Socket Units.
May 12 13:27:48.046328 systemd[1]: Reached target basic.target - Basic System.
May 12 13:27:48.047373 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 12 13:27:48.047406 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 12 13:27:48.048294 systemd[1]: Starting containerd.service - containerd container runtime...
May 12 13:27:48.052097 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 12 13:27:48.054433 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 12 13:27:48.056520 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 12 13:27:48.058635 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 12 13:27:48.059794 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 12 13:27:48.061086 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 12 13:27:48.063196 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 12 13:27:48.066047 jq[1464]: false
May 12 13:27:48.067228 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 12 13:27:48.070082 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 12 13:27:48.075100 systemd[1]: Starting systemd-logind.service - User Login Management...
May 12 13:27:48.076514 extend-filesystems[1465]: Found loop3
May 12 13:27:48.077450 extend-filesystems[1465]: Found loop4
May 12 13:27:48.077450 extend-filesystems[1465]: Found loop5
May 12 13:27:48.077450 extend-filesystems[1465]: Found vda
May 12 13:27:48.077450 extend-filesystems[1465]: Found vda1
May 12 13:27:48.077450 extend-filesystems[1465]: Found vda2
May 12 13:27:48.077450 extend-filesystems[1465]: Found vda3
May 12 13:27:48.077450 extend-filesystems[1465]: Found usr
May 12 13:27:48.077450 extend-filesystems[1465]: Found vda4
May 12 13:27:48.077450 extend-filesystems[1465]: Found vda6
May 12 13:27:48.077450 extend-filesystems[1465]: Found vda7
May 12 13:27:48.077450 extend-filesystems[1465]: Found vda9
May 12 13:27:48.077450 extend-filesystems[1465]: Checking size of /dev/vda9
May 12 13:27:48.077060 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 12 13:27:48.105113 extend-filesystems[1465]: Resized partition /dev/vda9
May 12 13:27:48.077490 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 12 13:27:48.081099 systemd[1]: Starting update-engine.service - Update Engine...
May 12 13:27:48.084185 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 12 13:27:48.108636 jq[1482]: true
May 12 13:27:48.086376 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 12 13:27:48.089915 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 12 13:27:48.093675 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 12 13:27:48.093862 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 12 13:27:48.094131 systemd[1]: motdgen.service: Deactivated successfully.
May 12 13:27:48.094292 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 12 13:27:48.097394 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 12 13:27:48.097563 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 12 13:27:48.109291 extend-filesystems[1488]: resize2fs 1.47.2 (1-Jan-2025)
May 12 13:27:48.126469 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1321)
May 12 13:27:48.129866 jq[1489]: true
May 12 13:27:48.129289 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 12 13:27:48.144050 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 12 13:27:48.148524 tar[1487]: linux-arm64/helm
May 12 13:27:48.156041 update_engine[1478]: I20250512 13:27:48.153674 1478 main.cc:92] Flatcar Update Engine starting
May 12 13:27:48.153835 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (Power Button)
May 12 13:27:48.156153 systemd-logind[1472]: New seat seat0.
May 12 13:27:48.157347 systemd[1]: Started systemd-logind.service - User Login Management.
May 12 13:27:48.170379 dbus-daemon[1462]: [system] SELinux support is enabled
May 12 13:27:48.171978 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 12 13:27:48.172074 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 12 13:27:48.177467 update_engine[1478]: I20250512 13:27:48.176368 1478 update_check_scheduler.cc:74] Next update check in 9m24s
May 12 13:27:48.177321 dbus-daemon[1462]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 12 13:27:48.176508 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 12 13:27:48.176547 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 12 13:27:48.179039 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 12 13:27:48.179066 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 12 13:27:48.184691 systemd[1]: Started update-engine.service - Update Engine.
May 12 13:27:48.187883 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 12 13:27:48.192417 extend-filesystems[1488]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 12 13:27:48.192417 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 1
May 12 13:27:48.192417 extend-filesystems[1488]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 12 13:27:48.196402 extend-filesystems[1465]: Resized filesystem in /dev/vda9
May 12 13:27:48.203452 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 12 13:27:48.203671 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 12 13:27:48.209609 bash[1518]: Updated "/home/core/.ssh/authorized_keys"
May 12 13:27:48.213904 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 12 13:27:48.216579 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 12 13:27:48.259526 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 12 13:27:48.374636 containerd[1500]: time="2025-05-12T13:27:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 12 13:27:48.375717 containerd[1500]: time="2025-05-12T13:27:48.375605480Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 12 13:27:48.387235 containerd[1500]: time="2025-05-12T13:27:48.387201360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="52.72µs"
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387310840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387334720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387532960Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387551080Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387573640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387622280Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387632840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387817160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387831360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387841200Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 12 13:27:48.387900 containerd[1500]: time="2025-05-12T13:27:48.387849280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 12 13:27:48.388162 containerd[1500]: time="2025-05-12T13:27:48.387919200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 12 13:27:48.390199 containerd[1500]: time="2025-05-12T13:27:48.388316680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 12 13:27:48.390199 containerd[1500]: time="2025-05-12T13:27:48.388357960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 12 13:27:48.390199 containerd[1500]: time="2025-05-12T13:27:48.388370160Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 12 13:27:48.390199 containerd[1500]: time="2025-05-12T13:27:48.389078080Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 12 13:27:48.390199 containerd[1500]: time="2025-05-12T13:27:48.389429520Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 12 13:27:48.390199 containerd[1500]: time="2025-05-12T13:27:48.389501280Z" level=info msg="metadata content store policy set" policy=shared
May 12 13:27:48.393212 containerd[1500]: time="2025-05-12T13:27:48.393181040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 12 13:27:48.393271 containerd[1500]: time="2025-05-12T13:27:48.393227640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 12 13:27:48.393271 containerd[1500]: time="2025-05-12T13:27:48.393249400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 12 13:27:48.393271 containerd[1500]: time="2025-05-12T13:27:48.393263680Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 12 13:27:48.393340 containerd[1500]: time="2025-05-12T13:27:48.393278800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 12 13:27:48.393559 containerd[1500]: time="2025-05-12T13:27:48.393530720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 12 13:27:48.393592 containerd[1500]: time="2025-05-12T13:27:48.393565240Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 12 13:27:48.393592 containerd[1500]: time="2025-05-12T13:27:48.393583960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 12 13:27:48.393630 containerd[1500]: time="2025-05-12T13:27:48.393596560Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 12 13:27:48.393630 containerd[1500]: time="2025-05-12T13:27:48.393612120Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 12 13:27:48.393630 containerd[1500]: time="2025-05-12T13:27:48.393625320Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 12 13:27:48.393683 containerd[1500]: time="2025-05-12T13:27:48.393642680Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 12 13:27:48.393777 containerd[1500]: time="2025-05-12T13:27:48.393752880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 12 13:27:48.393805 containerd[1500]: time="2025-05-12T13:27:48.393782360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 12 13:27:48.393823 containerd[1500]: time="2025-05-12T13:27:48.393803280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 12 13:27:48.393823 containerd[1500]: time="2025-05-12T13:27:48.393818600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 12 13:27:48.393856 containerd[1500]: time="2025-05-12T13:27:48.393832600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 12 13:27:48.393856 containerd[1500]: time="2025-05-12T13:27:48.393843440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 12 13:27:48.393895 containerd[1500]: time="2025-05-12T13:27:48.393857960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 12 13:27:48.393895 containerd[1500]: time="2025-05-12T13:27:48.393871800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 12 13:27:48.393895 containerd[1500]: time="2025-05-12T13:27:48.393886400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 12 13:27:48.393967 containerd[1500]: time="2025-05-12T13:27:48.393897920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 12 13:27:48.393967 containerd[1500]: time="2025-05-12T13:27:48.393912080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 12 13:27:48.394133 containerd[1500]: time="2025-05-12T13:27:48.394113160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 12 13:27:48.394559 containerd[1500]: time="2025-05-12T13:27:48.394533480Z" level=info msg="Start snapshots syncer"
May 12 13:27:48.394602 containerd[1500]: time="2025-05-12T13:27:48.394573160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 12 13:27:48.395755 containerd[1500]: time="2025-05-12T13:27:48.395654640Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 12 13:27:48.395879 containerd[1500]: time="2025-05-12T13:27:48.395769840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 12 13:27:48.395902 containerd[1500]: time="2025-05-12T13:27:48.395878800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 12 13:27:48.396137 containerd[1500]: time="2025-05-12T13:27:48.396105360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 12 13:27:48.396176 containerd[1500]: time="2025-05-12T13:27:48.396155040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 12 13:27:48.396200 containerd[1500]: time="2025-05-12T13:27:48.396179840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 12 13:27:48.396200 containerd[1500]: time="2025-05-12T13:27:48.396196640Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 12 13:27:48.396253 containerd[1500]: time="2025-05-12T13:27:48.396213080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 12 13:27:48.396253 containerd[1500]: time="2025-05-12T13:27:48.396224120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 12 13:27:48.396289 containerd[1500]: time="2025-05-12T13:27:48.396237880Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 12 13:27:48.396320 containerd[1500]: time="2025-05-12T13:27:48.396302720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 12 13:27:48.396344 containerd[1500]: time="2025-05-12T13:27:48.396336280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 12 13:27:48.396677 containerd[1500]: time="2025-05-12T13:27:48.396651480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 12 13:27:48.396732 containerd[1500]: time="2025-05-12T13:27:48.396711800Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 12 13:27:48.396758 containerd[1500]: time="2025-05-12T13:27:48.396737800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 12 13:27:48.396758 containerd[1500]: time="2025-05-12T13:27:48.396752520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 12 13:27:48.396793 containerd[1500]: time="2025-05-12T13:27:48.396766040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 12 13:27:48.396793 containerd[1500]: time="2025-05-12T13:27:48.396774800Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 12 13:27:48.396793 containerd[1500]: time="2025-05-12T13:27:48.396788120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 12 13:27:48.396845 containerd[1500]: time="2025-05-12T13:27:48.396802200Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 12 13:27:48.396896 containerd[1500]: time="2025-05-12T13:27:48.396879520Z" level=info msg="runtime interface created"
May 12 13:27:48.396896 containerd[1500]: time="2025-05-12T13:27:48.396891080Z" level=info msg="created NRI interface"
May 12 13:27:48.396939 containerd[1500]: time="2025-05-12T13:27:48.396900160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 12 13:27:48.396939 containerd[1500]: time="2025-05-12T13:27:48.396914840Z" level=info msg="Connect containerd service"
May 12 13:27:48.396986 containerd[1500]: time="2025-05-12T13:27:48.396968280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 12 13:27:48.401169 containerd[1500]: time="2025-05-12T13:27:48.401136440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 12 13:27:48.516125 containerd[1500]: time="2025-05-12T13:27:48.516023800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 12 13:27:48.516125 containerd[1500]: time="2025-05-12T13:27:48.516089840Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 12 13:27:48.516125 containerd[1500]: time="2025-05-12T13:27:48.516115600Z" level=info msg="Start subscribing containerd event"
May 12 13:27:48.516254 containerd[1500]: time="2025-05-12T13:27:48.516150000Z" level=info msg="Start recovering state"
May 12 13:27:48.516254 containerd[1500]: time="2025-05-12T13:27:48.516224240Z" level=info msg="Start event monitor"
May 12 13:27:48.516254 containerd[1500]: time="2025-05-12T13:27:48.516238080Z" level=info msg="Start cni network conf syncer for default"
May 12 13:27:48.516304 containerd[1500]: time="2025-05-12T13:27:48.516255720Z" level=info msg="Start streaming server"
May 12 13:27:48.516304 containerd[1500]: time="2025-05-12T13:27:48.516264160Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 12 13:27:48.516304 containerd[1500]: time="2025-05-12T13:27:48.516270600Z" level=info msg="runtime interface starting up..."
May 12 13:27:48.516304 containerd[1500]: time="2025-05-12T13:27:48.516276360Z" level=info msg="starting plugins..."
May 12 13:27:48.516304 containerd[1500]: time="2025-05-12T13:27:48.516290640Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 12 13:27:48.518513 containerd[1500]: time="2025-05-12T13:27:48.516408680Z" level=info msg="containerd successfully booted in 0.142097s"
May 12 13:27:48.516512 systemd[1]: Started containerd.service - containerd container runtime.
May 12 13:27:48.526669 tar[1487]: linux-arm64/LICENSE
May 12 13:27:48.526669 tar[1487]: linux-arm64/README.md
May 12 13:27:48.541991 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 12 13:27:48.944424 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 12 13:27:48.964998 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 12 13:27:48.967805 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 12 13:27:48.983174 systemd[1]: issuegen.service: Deactivated successfully.
May 12 13:27:48.984013 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 12 13:27:48.986691 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 12 13:27:49.005013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 12 13:27:49.008660 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 12 13:27:49.010666 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 12 13:27:49.012034 systemd[1]: Reached target getty.target - Login Prompts.
May 12 13:27:50.018061 systemd-networkd[1407]: eth0: Gained IPv6LL
May 12 13:27:50.020389 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 12 13:27:50.022088 systemd[1]: Reached target network-online.target - Network is Online.
May 12 13:27:50.025374 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 12 13:27:50.027500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 13:27:50.029530 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 12 13:27:50.066287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 12 13:27:50.068116 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 12 13:27:50.068529 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 12 13:27:50.071318 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 12 13:27:50.503341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 13:27:50.504856 systemd[1]: Reached target multi-user.target - Multi-User System.
May 12 13:27:50.506709 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 12 13:27:50.510903 systemd[1]: Startup finished in 2.191s (kernel) + 4.662s (initrd) + 4.167s (userspace) = 11.020s.
May 12 13:27:50.954084 kubelet[1591]: E0512 13:27:50.953981 1591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 12 13:27:50.956682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 12 13:27:50.956822 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 13:27:50.957157 systemd[1]: kubelet.service: Consumed 821ms CPU time, 242.4M memory peak.
May 12 13:27:54.792279 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 12 13:27:54.793367 systemd[1]: Started sshd@0-10.0.0.76:22-10.0.0.1:38254.service - OpenSSH per-connection server daemon (10.0.0.1:38254).
May 12 13:27:54.858907 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 38254 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:27:54.860522 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:27:54.873260 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 12 13:27:54.874337 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 12 13:27:54.879687 systemd-logind[1472]: New session 1 of user core.
May 12 13:27:54.895981 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 12 13:27:54.898390 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 12 13:27:54.916661 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 12 13:27:54.918967 systemd-logind[1472]: New session c1 of user core.
May 12 13:27:55.028349 systemd[1610]: Queued start job for default target default.target.
May 12 13:27:55.040904 systemd[1610]: Created slice app.slice - User Application Slice.
May 12 13:27:55.040943 systemd[1610]: Reached target paths.target - Paths.
May 12 13:27:55.041003 systemd[1610]: Reached target timers.target - Timers.
May 12 13:27:55.042227 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 12 13:27:55.051833 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 12 13:27:55.051904 systemd[1610]: Reached target sockets.target - Sockets.
May 12 13:27:55.051965 systemd[1610]: Reached target basic.target - Basic System.
May 12 13:27:55.051999 systemd[1610]: Reached target default.target - Main User Target.
May 12 13:27:55.052025 systemd[1610]: Startup finished in 127ms.
May 12 13:27:55.052238 systemd[1]: Started user@500.service - User Manager for UID 500.
May 12 13:27:55.066115 systemd[1]: Started session-1.scope - Session 1 of User core.
May 12 13:27:55.126639 systemd[1]: Started sshd@1-10.0.0.76:22-10.0.0.1:38262.service - OpenSSH per-connection server daemon (10.0.0.1:38262).
May 12 13:27:55.184139 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 38262 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:27:55.185365 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:27:55.189306 systemd-logind[1472]: New session 2 of user core.
May 12 13:27:55.204105 systemd[1]: Started session-2.scope - Session 2 of User core.
May 12 13:27:55.254281 sshd[1623]: Connection closed by 10.0.0.1 port 38262
May 12 13:27:55.254666 sshd-session[1621]: pam_unix(sshd:session): session closed for user core
May 12 13:27:55.264058 systemd[1]: sshd@1-10.0.0.76:22-10.0.0.1:38262.service: Deactivated successfully.
May 12 13:27:55.265505 systemd[1]: session-2.scope: Deactivated successfully.
May 12 13:27:55.268154 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit.
May 12 13:27:55.268962 systemd[1]: Started sshd@2-10.0.0.76:22-10.0.0.1:38274.service - OpenSSH per-connection server daemon (10.0.0.1:38274).
May 12 13:27:55.269796 systemd-logind[1472]: Removed session 2.
May 12 13:27:55.318303 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 38274 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:27:55.319481 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:27:55.324004 systemd-logind[1472]: New session 3 of user core.
May 12 13:27:55.335324 systemd[1]: Started session-3.scope - Session 3 of User core.
May 12 13:27:55.383764 sshd[1631]: Connection closed by 10.0.0.1 port 38274
May 12 13:27:55.384159 sshd-session[1628]: pam_unix(sshd:session): session closed for user core
May 12 13:27:55.399121 systemd[1]: sshd@2-10.0.0.76:22-10.0.0.1:38274.service: Deactivated successfully.
May 12 13:27:55.400730 systemd[1]: session-3.scope: Deactivated successfully.
May 12 13:27:55.403098 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit.
May 12 13:27:55.403462 systemd[1]: Started sshd@3-10.0.0.76:22-10.0.0.1:38288.service - OpenSSH per-connection server daemon (10.0.0.1:38288).
May 12 13:27:55.404754 systemd-logind[1472]: Removed session 3.
May 12 13:27:55.454060 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 38288 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:27:55.455388 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:27:55.459984 systemd-logind[1472]: New session 4 of user core.
May 12 13:27:55.472117 systemd[1]: Started session-4.scope - Session 4 of User core.
May 12 13:27:55.522493 sshd[1639]: Connection closed by 10.0.0.1 port 38288
May 12 13:27:55.522806 sshd-session[1636]: pam_unix(sshd:session): session closed for user core
May 12 13:27:55.538445 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit.
May 12 13:27:55.538535 systemd[1]: sshd@3-10.0.0.76:22-10.0.0.1:38288.service: Deactivated successfully.
May 12 13:27:55.539881 systemd[1]: session-4.scope: Deactivated successfully.
May 12 13:27:55.543058 systemd[1]: Started sshd@4-10.0.0.76:22-10.0.0.1:38300.service - OpenSSH per-connection server daemon (10.0.0.1:38300).
May 12 13:27:55.543619 systemd-logind[1472]: Removed session 4.
May 12 13:27:55.595214 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 38300 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:27:55.596325 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:27:55.601022 systemd-logind[1472]: New session 5 of user core.
May 12 13:27:55.610125 systemd[1]: Started session-5.scope - Session 5 of User core.
May 12 13:27:55.668674 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 12 13:27:55.668968 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 13:27:55.683856 sudo[1648]: pam_unix(sudo:session): session closed for user root
May 12 13:27:55.685277 sshd[1647]: Connection closed by 10.0.0.1 port 38300
May 12 13:27:55.685692 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
May 12 13:27:55.700227 systemd[1]: sshd@4-10.0.0.76:22-10.0.0.1:38300.service: Deactivated successfully.
May 12 13:27:55.701774 systemd[1]: session-5.scope: Deactivated successfully.
May 12 13:27:55.703475 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit.
May 12 13:27:55.705183 systemd[1]: Started sshd@5-10.0.0.76:22-10.0.0.1:38314.service - OpenSSH per-connection server daemon (10.0.0.1:38314).
May 12 13:27:55.705988 systemd-logind[1472]: Removed session 5.
May 12 13:27:55.757850 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 38314 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:27:55.759208 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:27:55.763907 systemd-logind[1472]: New session 6 of user core.
May 12 13:27:55.777118 systemd[1]: Started session-6.scope - Session 6 of User core.
May 12 13:27:55.829874 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 12 13:27:55.830178 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 13:27:55.833326 sudo[1658]: pam_unix(sudo:session): session closed for user root
May 12 13:27:55.838097 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 12 13:27:55.838387 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 13:27:55.846847 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 12 13:27:55.882744 augenrules[1680]: No rules
May 12 13:27:55.884055 systemd[1]: audit-rules.service: Deactivated successfully.
May 12 13:27:55.884284 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 12 13:27:55.885597 sudo[1657]: pam_unix(sudo:session): session closed for user root
May 12 13:27:55.886967 sshd[1656]: Connection closed by 10.0.0.1 port 38314
May 12 13:27:55.887388 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
May 12 13:27:55.908217 systemd[1]: sshd@5-10.0.0.76:22-10.0.0.1:38314.service: Deactivated successfully.
May 12 13:27:55.909809 systemd[1]: session-6.scope: Deactivated successfully.
May 12 13:27:55.911174 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit.
May 12 13:27:55.912380 systemd[1]: Started sshd@6-10.0.0.76:22-10.0.0.1:38316.service - OpenSSH per-connection server daemon (10.0.0.1:38316).
May 12 13:27:55.913180 systemd-logind[1472]: Removed session 6.
May 12 13:27:55.952366 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 38316 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:27:55.953604 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:27:55.958004 systemd-logind[1472]: New session 7 of user core.
May 12 13:27:55.970134 systemd[1]: Started session-7.scope - Session 7 of User core.
May 12 13:27:56.019615 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 12 13:27:56.019884 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 12 13:27:56.365346 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 12 13:27:56.378255 (dockerd)[1714]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 12 13:27:56.653235 dockerd[1714]: time="2025-05-12T13:27:56.653116406Z" level=info msg="Starting up"
May 12 13:27:56.654722 dockerd[1714]: time="2025-05-12T13:27:56.654694426Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 12 13:27:56.842687 dockerd[1714]: time="2025-05-12T13:27:56.842412300Z" level=info msg="Loading containers: start."
May 12 13:27:56.849972 kernel: Initializing XFRM netlink socket
May 12 13:27:57.040283 systemd-networkd[1407]: docker0: Link UP
May 12 13:27:57.043974 dockerd[1714]: time="2025-05-12T13:27:57.043854639Z" level=info msg="Loading containers: done."
May 12 13:27:57.058498 dockerd[1714]: time="2025-05-12T13:27:57.058145729Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 12 13:27:57.058498 dockerd[1714]: time="2025-05-12T13:27:57.058232480Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 12 13:27:57.058498 dockerd[1714]: time="2025-05-12T13:27:57.058330835Z" level=info msg="Initializing buildkit"
May 12 13:27:57.081876 dockerd[1714]: time="2025-05-12T13:27:57.081834743Z" level=info msg="Completed buildkit initialization"
May 12 13:27:57.088656 dockerd[1714]: time="2025-05-12T13:27:57.088614062Z" level=info msg="Daemon has completed initialization"
May 12 13:27:57.088877 dockerd[1714]: time="2025-05-12T13:27:57.088826322Z" level=info msg="API listen on /run/docker.sock"
May 12 13:27:57.088869 systemd[1]: Started docker.service - Docker Application Container Engine.
May 12 13:27:57.800042 containerd[1500]: time="2025-05-12T13:27:57.800002701Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 12 13:27:58.458963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415871920.mount: Deactivated successfully.
May 12 13:27:59.519240 containerd[1500]: time="2025-05-12T13:27:59.519073420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:27:59.520049 containerd[1500]: time="2025-05-12T13:27:59.519802484Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 12 13:27:59.520725 containerd[1500]: time="2025-05-12T13:27:59.520692692Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:27:59.522999 containerd[1500]: time="2025-05-12T13:27:59.522969563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:27:59.524069 containerd[1500]: time="2025-05-12T13:27:59.524030892Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.723982645s"
May 12 13:27:59.524122 containerd[1500]: time="2025-05-12T13:27:59.524072900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 12 13:27:59.539679 containerd[1500]: time="2025-05-12T13:27:59.539634947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 12 13:28:01.073268 containerd[1500]: time="2025-05-12T13:28:01.073204232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:01.074675 containerd[1500]: time="2025-05-12T13:28:01.074612851Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 12 13:28:01.076571 containerd[1500]: time="2025-05-12T13:28:01.075525445Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:01.077885 containerd[1500]: time="2025-05-12T13:28:01.077857016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:01.078790 containerd[1500]: time="2025-05-12T13:28:01.078736430Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.539059258s"
May 12 13:28:01.078897 containerd[1500]: time="2025-05-12T13:28:01.078881888Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 12 13:28:01.094476 containerd[1500]: time="2025-05-12T13:28:01.094439840Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 12 13:28:01.207223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 12 13:28:01.208615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 13:28:01.352237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 13:28:01.355597 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 12 13:28:01.396691 kubelet[2017]: E0512 13:28:01.396631 2017 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 12 13:28:01.399535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 12 13:28:01.399672 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 13:28:01.400148 systemd[1]: kubelet.service: Consumed 138ms CPU time, 97.5M memory peak.
May 12 13:28:02.083361 containerd[1500]: time="2025-05-12T13:28:02.083319158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:02.084247 containerd[1500]: time="2025-05-12T13:28:02.084022927Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 12 13:28:02.084978 containerd[1500]: time="2025-05-12T13:28:02.084937847Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:02.087657 containerd[1500]: time="2025-05-12T13:28:02.087599974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:02.089921 containerd[1500]: time="2025-05-12T13:28:02.089222477Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 994.744845ms"
May 12 13:28:02.089921 containerd[1500]: time="2025-05-12T13:28:02.089265184Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 12 13:28:02.104501 containerd[1500]: time="2025-05-12T13:28:02.104465248Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 12 13:28:03.275395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount478897000.mount: Deactivated successfully.
May 12 13:28:03.476011 containerd[1500]: time="2025-05-12T13:28:03.475938883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:03.476618 containerd[1500]: time="2025-05-12T13:28:03.476573276Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 12 13:28:03.477435 containerd[1500]: time="2025-05-12T13:28:03.477400472Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:03.478921 containerd[1500]: time="2025-05-12T13:28:03.478873552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:03.479384 containerd[1500]: time="2025-05-12T13:28:03.479346514Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.374843199s"
May 12 13:28:03.479437 containerd[1500]: time="2025-05-12T13:28:03.479385122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 12 13:28:03.494489 containerd[1500]: time="2025-05-12T13:28:03.494459978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 12 13:28:04.062617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358386644.mount: Deactivated successfully.
May 12 13:28:04.629047 containerd[1500]: time="2025-05-12T13:28:04.628992219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:04.629649 containerd[1500]: time="2025-05-12T13:28:04.629623467Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 12 13:28:04.630286 containerd[1500]: time="2025-05-12T13:28:04.630260763Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:04.632610 containerd[1500]: time="2025-05-12T13:28:04.632578800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:04.634480 containerd[1500]: time="2025-05-12T13:28:04.634363357Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.139870132s"
May 12 13:28:04.634480 containerd[1500]: time="2025-05-12T13:28:04.634398691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 12 13:28:04.648744 containerd[1500]: time="2025-05-12T13:28:04.648710081Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 12 13:28:05.050233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161354930.mount: Deactivated successfully.
May 12 13:28:05.053873 containerd[1500]: time="2025-05-12T13:28:05.053812781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:05.054526 containerd[1500]: time="2025-05-12T13:28:05.054496763Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 12 13:28:05.055109 containerd[1500]: time="2025-05-12T13:28:05.055082834Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:05.057715 containerd[1500]: time="2025-05-12T13:28:05.057678565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:05.058271 containerd[1500]: time="2025-05-12T13:28:05.058247156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 409.502851ms"
May 12 13:28:05.058320 containerd[1500]: time="2025-05-12T13:28:05.058277178Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 12 13:28:05.073154 containerd[1500]: time="2025-05-12T13:28:05.073077514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 12 13:28:05.534564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302772888.mount: Deactivated successfully.
May 12 13:28:07.241522 containerd[1500]: time="2025-05-12T13:28:07.241477270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:07.242431 containerd[1500]: time="2025-05-12T13:28:07.241935303Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 12 13:28:07.242908 containerd[1500]: time="2025-05-12T13:28:07.242866117Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:07.246496 containerd[1500]: time="2025-05-12T13:28:07.246454087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:28:07.247083 containerd[1500]: time="2025-05-12T13:28:07.247050992Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.173933931s"
May 12 13:28:07.247221 containerd[1500]: time="2025-05-12T13:28:07.247085113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 12 13:28:11.650545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 12 13:28:11.652013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 13:28:11.774094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 13:28:11.780678 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 12 13:28:11.824068 kubelet[2268]: E0512 13:28:11.824017 2268 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 12 13:28:11.826762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 12 13:28:11.827026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 12 13:28:11.829027 systemd[1]: kubelet.service: Consumed 131ms CPU time, 98.8M memory peak.
May 12 13:28:12.280152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 13:28:12.280287 systemd[1]: kubelet.service: Consumed 131ms CPU time, 98.8M memory peak.
May 12 13:28:12.282441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 13:28:12.300581 systemd[1]: Reload requested from client PID 2283 ('systemctl') (unit session-7.scope)...
May 12 13:28:12.300596 systemd[1]: Reloading...
May 12 13:28:12.366975 zram_generator::config[2325]: No configuration found.
May 12 13:28:12.527930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 13:28:12.612240 systemd[1]: Reloading finished in 311 ms.
May 12 13:28:12.653718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 13:28:12.657476 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 13:28:12.657839 systemd[1]: kubelet.service: Deactivated successfully.
May 12 13:28:12.658088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 13:28:12.658126 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.4M memory peak.
May 12 13:28:12.659556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 12 13:28:12.778157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 12 13:28:12.781553 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 12 13:28:12.822475 kubelet[2373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 13:28:12.822475 kubelet[2373]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 12 13:28:12.822475 kubelet[2373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 12 13:28:12.822822 kubelet[2373]: I0512 13:28:12.822512 2373 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 12 13:28:14.011671 kubelet[2373]: I0512 13:28:14.011621 2373 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 12 13:28:14.011671 kubelet[2373]: I0512 13:28:14.011655 2373 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 12 13:28:14.012025 kubelet[2373]: I0512 13:28:14.011876 2373 server.go:927] "Client rotation is on, will bootstrap in background"
May 12 13:28:14.037261 kubelet[2373]: E0512 13:28:14.037225 2373 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.76:6443: connect: connection refused
May 12 13:28:14.038989 kubelet[2373]: I0512 13:28:14.038864 2373 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 12 13:28:14.048469 kubelet[2373]: I0512 13:28:14.048410 2373 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 12 13:28:14.049684 kubelet[2373]: I0512 13:28:14.049629 2373 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 12 13:28:14.049843 kubelet[2373]: I0512 13:28:14.049677 2373 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 12 13:28:14.049924 kubelet[2373]: I0512 13:28:14.049897 2373 topology_manager.go:138] "Creating topology manager with none policy"
May 12 13:28:14.049924 kubelet[2373]: I0512 13:28:14.049907 2373 container_manager_linux.go:301] "Creating device plugin manager"
May 12 13:28:14.050195 kubelet[2373]: I0512 13:28:14.050170 2373 state_mem.go:36] "Initialized new in-memory state store"
May 12 13:28:14.052858 kubelet[2373]: I0512 13:28:14.052830 2373 kubelet.go:400] "Attempting to sync node with API server"
May 12 13:28:14.052858 kubelet[2373]: I0512 13:28:14.052851 2373 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 12 13:28:14.053148 kubelet[2373]: I0512 13:28:14.053126 2373 kubelet.go:312] "Adding apiserver pod source"
May 12 13:28:14.053438 kubelet[2373]: W0512 13:28:14.053390 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 12 13:28:14.053474 kubelet[2373]: E0512 13:28:14.053442 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 12 13:28:14.054009 kubelet[2373]: I0512 13:28:14.053978 2373 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 12 13:28:14.054828 kubelet[2373]: W0512 13:28:14.054780 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 12 13:28:14.054854 kubelet[2373]: E0512 13:28:14.054828 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 12 13:28:14.055077 kubelet[2373]: I0512 13:28:14.055050 2373 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 12 13:28:14.055483 kubelet[2373]: I0512 13:28:14.055456 2373 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 12 13:28:14.055662 kubelet[2373]: W0512 13:28:14.055639 2373 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 12 13:28:14.056751 kubelet[2373]: I0512 13:28:14.056723 2373 server.go:1264] "Started kubelet"
May 12 13:28:14.058710 kubelet[2373]: I0512 13:28:14.058682 2373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 12 13:28:14.060030 kubelet[2373]: E0512 13:28:14.059467 2373 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.76:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.76:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ecaa002c9f8d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 13:28:14.056700116 +0000 UTC m=+1.272337717,LastTimestamp:2025-05-12 13:28:14.056700116 +0000 UTC m=+1.272337717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 12 13:28:14.060636 kubelet[2373]: I0512 13:28:14.060183 2373 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 12 13:28:14.061307 kubelet[2373]: I0512 13:28:14.061093 2373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 12 13:28:14.061371 kubelet[2373]: I0512 13:28:14.061310 2373 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 12 13:28:14.061394 kubelet[2373]: I0512 13:28:14.061389 2373 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 12 13:28:14.061525 kubelet[2373]: I0512 13:28:14.061498 2373 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 12 13:28:14.061769 kubelet[2373]: E0512 13:28:14.061742 2373 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 12 13:28:14.061887 kubelet[2373]: I0512 13:28:14.061863 2373 server.go:455] "Adding debug handlers to kubelet server"
May 12 13:28:14.062515 kubelet[2373]: I0512 13:28:14.062479 2373 reconciler.go:26] "Reconciler: start to sync state"
May 12 13:28:14.063027 kubelet[2373]: W0512 13:28:14.062975 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 12 13:28:14.063090 kubelet[2373]: E0512 13:28:14.063034 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused
May 12 13:28:14.064072 kubelet[2373]: I0512 13:28:14.063575 2373 factory.go:221] Registration of the systemd container factory successfully
May 12 13:28:14.064072 kubelet[2373]: I0512 13:28:14.063670 2373 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 12 13:28:14.064072 kubelet[2373]: E0512 13:28:14.063853 2373 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="200ms" May 12 13:28:14.064982 kubelet[2373]: I0512 13:28:14.064668 2373 factory.go:221] Registration of the containerd container factory successfully May 12 13:28:14.071734 kubelet[2373]: I0512 13:28:14.071624 2373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 13:28:14.072974 kubelet[2373]: I0512 13:28:14.072727 2373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 12 13:28:14.072974 kubelet[2373]: I0512 13:28:14.072757 2373 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 13:28:14.072974 kubelet[2373]: I0512 13:28:14.072772 2373 kubelet.go:2337] "Starting kubelet main sync loop" May 12 13:28:14.072974 kubelet[2373]: E0512 13:28:14.072810 2373 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 13:28:14.076022 kubelet[2373]: W0512 13:28:14.075898 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:14.076022 kubelet[2373]: E0512 13:28:14.076016 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:14.076907 kubelet[2373]: I0512 13:28:14.076885 2373 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 13:28:14.076907 kubelet[2373]: I0512 13:28:14.076899 2373 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" May 12 13:28:14.077016 kubelet[2373]: I0512 13:28:14.076916 2373 state_mem.go:36] "Initialized new in-memory state store" May 12 13:28:14.162648 kubelet[2373]: I0512 13:28:14.162590 2373 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:28:14.162995 kubelet[2373]: E0512 13:28:14.162934 2373 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" May 12 13:28:14.173114 kubelet[2373]: E0512 13:28:14.173088 2373 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 12 13:28:14.200350 kubelet[2373]: I0512 13:28:14.200316 2373 policy_none.go:49] "None policy: Start" May 12 13:28:14.201029 kubelet[2373]: I0512 13:28:14.200993 2373 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 13:28:14.201029 kubelet[2373]: I0512 13:28:14.201025 2373 state_mem.go:35] "Initializing new in-memory state store" May 12 13:28:14.206990 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 12 13:28:14.220675 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 12 13:28:14.239467 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 12 13:28:14.241420 kubelet[2373]: I0512 13:28:14.241384 2373 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 13:28:14.241623 kubelet[2373]: I0512 13:28:14.241580 2373 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 13:28:14.241816 kubelet[2373]: I0512 13:28:14.241695 2373 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 13:28:14.242923 kubelet[2373]: E0512 13:28:14.242886 2373 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 12 13:28:14.264666 kubelet[2373]: E0512 13:28:14.264546 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="400ms" May 12 13:28:14.364915 kubelet[2373]: I0512 13:28:14.364888 2373 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:28:14.365255 kubelet[2373]: E0512 13:28:14.365210 2373 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" May 12 13:28:14.373380 kubelet[2373]: I0512 13:28:14.373330 2373 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 12 13:28:14.374356 kubelet[2373]: I0512 13:28:14.374336 2373 topology_manager.go:215] "Topology Admit Handler" podUID="8d4f6713f2d83627690592f4d5be1518" podNamespace="kube-system" podName="kube-apiserver-localhost" May 12 13:28:14.375082 kubelet[2373]: I0512 13:28:14.375056 2373 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" 
podNamespace="kube-system" podName="kube-controller-manager-localhost" May 12 13:28:14.380630 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 12 13:28:14.400717 systemd[1]: Created slice kubepods-burstable-pod8d4f6713f2d83627690592f4d5be1518.slice - libcontainer container kubepods-burstable-pod8d4f6713f2d83627690592f4d5be1518.slice. May 12 13:28:14.413498 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 12 13:28:14.464373 kubelet[2373]: I0512 13:28:14.464329 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:14.464373 kubelet[2373]: I0512 13:28:14.464367 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:14.464494 kubelet[2373]: I0512 13:28:14.464393 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 12 13:28:14.464494 kubelet[2373]: I0512 13:28:14.464412 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d4f6713f2d83627690592f4d5be1518-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d4f6713f2d83627690592f4d5be1518\") " pod="kube-system/kube-apiserver-localhost" May 12 13:28:14.464494 kubelet[2373]: I0512 13:28:14.464441 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d4f6713f2d83627690592f4d5be1518-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d4f6713f2d83627690592f4d5be1518\") " pod="kube-system/kube-apiserver-localhost" May 12 13:28:14.464494 kubelet[2373]: I0512 13:28:14.464457 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d4f6713f2d83627690592f4d5be1518-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8d4f6713f2d83627690592f4d5be1518\") " pod="kube-system/kube-apiserver-localhost" May 12 13:28:14.464494 kubelet[2373]: I0512 13:28:14.464474 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:14.464599 kubelet[2373]: I0512 13:28:14.464504 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:14.464599 kubelet[2373]: I0512 13:28:14.464525 2373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:14.666096 kubelet[2373]: E0512 13:28:14.665992 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="800ms" May 12 13:28:14.699739 containerd[1500]: time="2025-05-12T13:28:14.699683000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 12 13:28:14.703201 containerd[1500]: time="2025-05-12T13:28:14.703165611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8d4f6713f2d83627690592f4d5be1518,Namespace:kube-system,Attempt:0,}" May 12 13:28:14.716071 containerd[1500]: time="2025-05-12T13:28:14.716034836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 12 13:28:14.766546 kubelet[2373]: I0512 13:28:14.766510 2373 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:28:14.766819 kubelet[2373]: E0512 13:28:14.766785 2373 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" May 12 13:28:15.066096 kubelet[2373]: W0512 13:28:15.066024 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection 
refused May 12 13:28:15.066096 kubelet[2373]: E0512 13:28:15.066085 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:15.077706 kubelet[2373]: W0512 13:28:15.077657 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:15.077706 kubelet[2373]: E0512 13:28:15.077708 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:15.218792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3481094921.mount: Deactivated successfully. 
May 12 13:28:15.222597 containerd[1500]: time="2025-05-12T13:28:15.222560625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:28:15.223270 containerd[1500]: time="2025-05-12T13:28:15.223241686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 12 13:28:15.224078 containerd[1500]: time="2025-05-12T13:28:15.224021748Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:28:15.226776 containerd[1500]: time="2025-05-12T13:28:15.226729490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:28:15.227437 containerd[1500]: time="2025-05-12T13:28:15.227218182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 522.35786ms" May 12 13:28:15.227737 containerd[1500]: time="2025-05-12T13:28:15.227714745Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:28:15.228397 containerd[1500]: time="2025-05-12T13:28:15.228340233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 12 13:28:15.230662 containerd[1500]: 
time="2025-05-12T13:28:15.230623406Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:28:15.231212 containerd[1500]: time="2025-05-12T13:28:15.231184371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 12 13:28:15.233306 containerd[1500]: time="2025-05-12T13:28:15.233166147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 531.332144ms" May 12 13:28:15.238808 containerd[1500]: time="2025-05-12T13:28:15.238773242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 520.888669ms" May 12 13:28:15.240826 containerd[1500]: time="2025-05-12T13:28:15.240699445Z" level=info msg="connecting to shim ba821f800b40d90356075c45d92a3b43d3ab8c562903d0018b3b30450d203c69" address="unix:///run/containerd/s/79d80c9d7f9a03a6c3cf07bbd2dd9acda22f32340d414a1eb3e865f1d8267293" namespace=k8s.io protocol=ttrpc version=3 May 12 13:28:15.252563 containerd[1500]: time="2025-05-12T13:28:15.252423702Z" level=info msg="connecting to shim 1511b25434c381413a0c7bc651ee93353c377aa4ba1a3e97faf42e12edbb7169" address="unix:///run/containerd/s/5df271ae1e169a671e34b5ebd027eaaa12a053742c5ba492d92c86acf0de40c5" namespace=k8s.io protocol=ttrpc version=3 May 12 13:28:15.263461 containerd[1500]: time="2025-05-12T13:28:15.263313642Z" level=info msg="connecting to shim 
200d62060f00400ddef73a0baaf7423968ee423ed4331376a027cba80da47823" address="unix:///run/containerd/s/adeb3665304db4352d4174e2663cf5f4d624d31e9a4182885a89f7bcb8432aca" namespace=k8s.io protocol=ttrpc version=3 May 12 13:28:15.272127 systemd[1]: Started cri-containerd-ba821f800b40d90356075c45d92a3b43d3ab8c562903d0018b3b30450d203c69.scope - libcontainer container ba821f800b40d90356075c45d92a3b43d3ab8c562903d0018b3b30450d203c69. May 12 13:28:15.275843 systemd[1]: Started cri-containerd-1511b25434c381413a0c7bc651ee93353c377aa4ba1a3e97faf42e12edbb7169.scope - libcontainer container 1511b25434c381413a0c7bc651ee93353c377aa4ba1a3e97faf42e12edbb7169. May 12 13:28:15.288092 systemd[1]: Started cri-containerd-200d62060f00400ddef73a0baaf7423968ee423ed4331376a027cba80da47823.scope - libcontainer container 200d62060f00400ddef73a0baaf7423968ee423ed4331376a027cba80da47823. May 12 13:28:15.321076 containerd[1500]: time="2025-05-12T13:28:15.320483430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"1511b25434c381413a0c7bc651ee93353c377aa4ba1a3e97faf42e12edbb7169\"" May 12 13:28:15.325603 containerd[1500]: time="2025-05-12T13:28:15.325565796Z" level=info msg="CreateContainer within sandbox \"1511b25434c381413a0c7bc651ee93353c377aa4ba1a3e97faf42e12edbb7169\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 12 13:28:15.327047 containerd[1500]: time="2025-05-12T13:28:15.327013775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"200d62060f00400ddef73a0baaf7423968ee423ed4331376a027cba80da47823\"" May 12 13:28:15.327855 containerd[1500]: time="2025-05-12T13:28:15.327828834Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8d4f6713f2d83627690592f4d5be1518,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba821f800b40d90356075c45d92a3b43d3ab8c562903d0018b3b30450d203c69\"" May 12 13:28:15.329410 containerd[1500]: time="2025-05-12T13:28:15.329362229Z" level=info msg="CreateContainer within sandbox \"200d62060f00400ddef73a0baaf7423968ee423ed4331376a027cba80da47823\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 12 13:28:15.330001 containerd[1500]: time="2025-05-12T13:28:15.329973015Z" level=info msg="CreateContainer within sandbox \"ba821f800b40d90356075c45d92a3b43d3ab8c562903d0018b3b30450d203c69\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 12 13:28:15.335266 containerd[1500]: time="2025-05-12T13:28:15.335230930Z" level=info msg="Container 03df356ce7da9ec7b7a2445a422ded4edf2e96b8b72577aa6ca724ae7d9e62c8: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:15.338261 containerd[1500]: time="2025-05-12T13:28:15.338223530Z" level=info msg="Container ed24f1177f930d9638beb4075bc04c0b4f09948daa175d53bf60791d7bc3b720: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:15.342640 containerd[1500]: time="2025-05-12T13:28:15.342600225Z" level=info msg="Container 2b5d6fcac786a75312d13a63f1edc424bf7cf4272ef7a49648de7be42552cb3b: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:15.347264 containerd[1500]: time="2025-05-12T13:28:15.347215953Z" level=info msg="CreateContainer within sandbox \"ba821f800b40d90356075c45d92a3b43d3ab8c562903d0018b3b30450d203c69\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed24f1177f930d9638beb4075bc04c0b4f09948daa175d53bf60791d7bc3b720\"" May 12 13:28:15.348066 containerd[1500]: time="2025-05-12T13:28:15.348025219Z" level=info msg="CreateContainer within sandbox \"1511b25434c381413a0c7bc651ee93353c377aa4ba1a3e97faf42e12edbb7169\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"03df356ce7da9ec7b7a2445a422ded4edf2e96b8b72577aa6ca724ae7d9e62c8\"" May 12 13:28:15.348609 containerd[1500]: time="2025-05-12T13:28:15.348571442Z" level=info msg="StartContainer for \"03df356ce7da9ec7b7a2445a422ded4edf2e96b8b72577aa6ca724ae7d9e62c8\"" May 12 13:28:15.348695 containerd[1500]: time="2025-05-12T13:28:15.348646791Z" level=info msg="StartContainer for \"ed24f1177f930d9638beb4075bc04c0b4f09948daa175d53bf60791d7bc3b720\"" May 12 13:28:15.349655 containerd[1500]: time="2025-05-12T13:28:15.349629409Z" level=info msg="connecting to shim 03df356ce7da9ec7b7a2445a422ded4edf2e96b8b72577aa6ca724ae7d9e62c8" address="unix:///run/containerd/s/5df271ae1e169a671e34b5ebd027eaaa12a053742c5ba492d92c86acf0de40c5" protocol=ttrpc version=3 May 12 13:28:15.349852 containerd[1500]: time="2025-05-12T13:28:15.349824774Z" level=info msg="connecting to shim ed24f1177f930d9638beb4075bc04c0b4f09948daa175d53bf60791d7bc3b720" address="unix:///run/containerd/s/79d80c9d7f9a03a6c3cf07bbd2dd9acda22f32340d414a1eb3e865f1d8267293" protocol=ttrpc version=3 May 12 13:28:15.351391 containerd[1500]: time="2025-05-12T13:28:15.351344346Z" level=info msg="CreateContainer within sandbox \"200d62060f00400ddef73a0baaf7423968ee423ed4331376a027cba80da47823\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2b5d6fcac786a75312d13a63f1edc424bf7cf4272ef7a49648de7be42552cb3b\"" May 12 13:28:15.352794 containerd[1500]: time="2025-05-12T13:28:15.352752213Z" level=info msg="StartContainer for \"2b5d6fcac786a75312d13a63f1edc424bf7cf4272ef7a49648de7be42552cb3b\"" May 12 13:28:15.354158 containerd[1500]: time="2025-05-12T13:28:15.353749933Z" level=info msg="connecting to shim 2b5d6fcac786a75312d13a63f1edc424bf7cf4272ef7a49648de7be42552cb3b" address="unix:///run/containerd/s/adeb3665304db4352d4174e2663cf5f4d624d31e9a4182885a89f7bcb8432aca" protocol=ttrpc version=3 May 12 13:28:15.371186 systemd[1]: Started 
cri-containerd-03df356ce7da9ec7b7a2445a422ded4edf2e96b8b72577aa6ca724ae7d9e62c8.scope - libcontainer container 03df356ce7da9ec7b7a2445a422ded4edf2e96b8b72577aa6ca724ae7d9e62c8. May 12 13:28:15.372287 systemd[1]: Started cri-containerd-ed24f1177f930d9638beb4075bc04c0b4f09948daa175d53bf60791d7bc3b720.scope - libcontainer container ed24f1177f930d9638beb4075bc04c0b4f09948daa175d53bf60791d7bc3b720. May 12 13:28:15.376494 systemd[1]: Started cri-containerd-2b5d6fcac786a75312d13a63f1edc424bf7cf4272ef7a49648de7be42552cb3b.scope - libcontainer container 2b5d6fcac786a75312d13a63f1edc424bf7cf4272ef7a49648de7be42552cb3b. May 12 13:28:15.401470 kubelet[2373]: W0512 13:28:15.401395 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:15.401470 kubelet[2373]: E0512 13:28:15.401470 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:15.418877 containerd[1500]: time="2025-05-12T13:28:15.418718340Z" level=info msg="StartContainer for \"2b5d6fcac786a75312d13a63f1edc424bf7cf4272ef7a49648de7be42552cb3b\" returns successfully" May 12 13:28:15.435687 containerd[1500]: time="2025-05-12T13:28:15.431186261Z" level=info msg="StartContainer for \"03df356ce7da9ec7b7a2445a422ded4edf2e96b8b72577aa6ca724ae7d9e62c8\" returns successfully" May 12 13:28:15.464311 containerd[1500]: time="2025-05-12T13:28:15.459477589Z" level=info msg="StartContainer for \"ed24f1177f930d9638beb4075bc04c0b4f09948daa175d53bf60791d7bc3b720\" returns successfully" May 12 13:28:15.470248 kubelet[2373]: E0512 13:28:15.470207 2373 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.76:6443: connect: connection refused" interval="1.6s" May 12 13:28:15.476557 kubelet[2373]: W0512 13:28:15.476476 2373 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:15.476557 kubelet[2373]: E0512 13:28:15.476537 2373 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.76:6443: connect: connection refused May 12 13:28:15.572097 kubelet[2373]: I0512 13:28:15.571990 2373 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:28:15.572856 kubelet[2373]: E0512 13:28:15.572824 2373 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.76:6443/api/v1/nodes\": dial tcp 10.0.0.76:6443: connect: connection refused" node="localhost" May 12 13:28:17.075937 kubelet[2373]: E0512 13:28:17.075894 2373 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 12 13:28:17.174689 kubelet[2373]: I0512 13:28:17.174624 2373 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:28:17.182591 kubelet[2373]: I0512 13:28:17.182542 2373 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 12 13:28:17.188857 kubelet[2373]: E0512 13:28:17.188830 2373 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:28:17.289861 kubelet[2373]: E0512 13:28:17.289827 2373 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"localhost\" not found" May 12 13:28:17.390637 kubelet[2373]: E0512 13:28:17.390371 2373 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:28:18.057642 kubelet[2373]: I0512 13:28:18.057611 2373 apiserver.go:52] "Watching apiserver" May 12 13:28:18.062141 kubelet[2373]: I0512 13:28:18.062116 2373 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 13:28:18.751934 systemd[1]: Reload requested from client PID 2649 ('systemctl') (unit session-7.scope)... May 12 13:28:18.752010 systemd[1]: Reloading... May 12 13:28:18.818981 zram_generator::config[2693]: No configuration found. May 12 13:28:18.888451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 13:28:18.985453 systemd[1]: Reloading finished in 233 ms. May 12 13:28:19.004409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:28:19.018906 systemd[1]: kubelet.service: Deactivated successfully. May 12 13:28:19.019199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:28:19.019261 systemd[1]: kubelet.service: Consumed 1.583s CPU time, 115.8M memory peak. May 12 13:28:19.021271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:28:19.134009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:28:19.144314 (kubelet)[2734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 13:28:19.184959 kubelet[2734]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 12 13:28:19.184959 kubelet[2734]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 12 13:28:19.184959 kubelet[2734]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 13:28:19.185346 kubelet[2734]: I0512 13:28:19.185012 2734 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 13:28:19.188903 kubelet[2734]: I0512 13:28:19.188854 2734 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 12 13:28:19.188903 kubelet[2734]: I0512 13:28:19.188880 2734 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 13:28:19.189078 kubelet[2734]: I0512 13:28:19.189054 2734 server.go:927] "Client rotation is on, will bootstrap in background" May 12 13:28:19.190363 kubelet[2734]: I0512 13:28:19.190342 2734 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 12 13:28:19.191486 kubelet[2734]: I0512 13:28:19.191464 2734 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 13:28:19.198462 kubelet[2734]: I0512 13:28:19.198402 2734 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 12 13:28:19.198623 kubelet[2734]: I0512 13:28:19.198597 2734 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 13:28:19.198766 kubelet[2734]: I0512 13:28:19.198623 2734 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 12 13:28:19.198831 kubelet[2734]: I0512 13:28:19.198773 2734 topology_manager.go:138] "Creating topology manager with none policy" May 12 
13:28:19.198831 kubelet[2734]: I0512 13:28:19.198781 2734 container_manager_linux.go:301] "Creating device plugin manager" May 12 13:28:19.198831 kubelet[2734]: I0512 13:28:19.198816 2734 state_mem.go:36] "Initialized new in-memory state store" May 12 13:28:19.198919 kubelet[2734]: I0512 13:28:19.198909 2734 kubelet.go:400] "Attempting to sync node with API server" May 12 13:28:19.198945 kubelet[2734]: I0512 13:28:19.198922 2734 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 13:28:19.198986 kubelet[2734]: I0512 13:28:19.198975 2734 kubelet.go:312] "Adding apiserver pod source" May 12 13:28:19.199009 kubelet[2734]: I0512 13:28:19.198995 2734 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 13:28:19.203075 kubelet[2734]: I0512 13:28:19.199454 2734 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 12 13:28:19.203075 kubelet[2734]: I0512 13:28:19.199606 2734 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 13:28:19.203075 kubelet[2734]: I0512 13:28:19.199944 2734 server.go:1264] "Started kubelet" May 12 13:28:19.203075 kubelet[2734]: I0512 13:28:19.200752 2734 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 13:28:19.203075 kubelet[2734]: I0512 13:28:19.201388 2734 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 12 13:28:19.203075 kubelet[2734]: I0512 13:28:19.202216 2734 server.go:455] "Adding debug handlers to kubelet server" May 12 13:28:19.204044 kubelet[2734]: I0512 13:28:19.201003 2734 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 13:28:19.206956 kubelet[2734]: I0512 13:28:19.205264 2734 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 13:28:19.206956 kubelet[2734]: I0512 13:28:19.206130 2734 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 12 13:28:19.206956 kubelet[2734]: I0512 13:28:19.206219 2734 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 13:28:19.206956 kubelet[2734]: I0512 13:28:19.206340 2734 reconciler.go:26] "Reconciler: start to sync state" May 12 13:28:19.210035 kubelet[2734]: E0512 13:28:19.208997 2734 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 13:28:19.210119 kubelet[2734]: I0512 13:28:19.210088 2734 factory.go:221] Registration of the systemd container factory successfully May 12 13:28:19.210255 kubelet[2734]: I0512 13:28:19.210230 2734 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 13:28:19.211665 kubelet[2734]: I0512 13:28:19.211642 2734 factory.go:221] Registration of the containerd container factory successfully May 12 13:28:19.222533 kubelet[2734]: I0512 13:28:19.222477 2734 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 13:28:19.227973 kubelet[2734]: I0512 13:28:19.227816 2734 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 13:28:19.227973 kubelet[2734]: I0512 13:28:19.227854 2734 status_manager.go:217] "Starting to sync pod status with apiserver" May 12 13:28:19.227973 kubelet[2734]: I0512 13:28:19.227869 2734 kubelet.go:2337] "Starting kubelet main sync loop" May 12 13:28:19.227973 kubelet[2734]: E0512 13:28:19.227907 2734 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.256226 2734 cpu_manager.go:214] "Starting CPU manager" policy="none" May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.256242 2734 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.256261 2734 state_mem.go:36] "Initialized new in-memory state store" May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.256404 2734 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.256415 2734 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.256431 2734 policy_none.go:49] "None policy: Start" May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.256902 2734 memory_manager.go:170] "Starting memorymanager" policy="None" May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.256918 2734 state_mem.go:35] "Initializing new in-memory state store" May 12 13:28:19.257064 kubelet[2734]: I0512 13:28:19.257045 2734 state_mem.go:75] "Updated machine memory state" May 12 13:28:19.261332 kubelet[2734]: I0512 13:28:19.261296 2734 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 13:28:19.261565 kubelet[2734]: I0512 13:28:19.261519 2734 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 13:28:19.261639 kubelet[2734]: I0512 13:28:19.261628 2734 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 13:28:19.308043 kubelet[2734]: I0512 13:28:19.308003 2734 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 12 13:28:19.313321 kubelet[2734]: I0512 13:28:19.313290 2734 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 12 13:28:19.313431 kubelet[2734]: I0512 13:28:19.313362 2734 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 12 13:28:19.328718 kubelet[2734]: I0512 13:28:19.328676 2734 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 12 13:28:19.328807 kubelet[2734]: I0512 13:28:19.328797 2734 topology_manager.go:215] "Topology Admit Handler" podUID="8d4f6713f2d83627690592f4d5be1518" podNamespace="kube-system" podName="kube-apiserver-localhost" May 12 13:28:19.328850 kubelet[2734]: I0512 13:28:19.328832 2734 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 12 13:28:19.348972 kubelet[2734]: E0512 13:28:19.348906 2734 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 12 13:28:19.348972 kubelet[2734]: E0512 13:28:19.348906 2734 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 12 13:28:19.507918 kubelet[2734]: I0512 13:28:19.507801 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d4f6713f2d83627690592f4d5be1518-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8d4f6713f2d83627690592f4d5be1518\") " 
pod="kube-system/kube-apiserver-localhost" May 12 13:28:19.507918 kubelet[2734]: I0512 13:28:19.507849 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:19.507918 kubelet[2734]: I0512 13:28:19.507869 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:19.507918 kubelet[2734]: I0512 13:28:19.507886 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:19.507918 kubelet[2734]: I0512 13:28:19.507903 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:19.508139 kubelet[2734]: I0512 13:28:19.507919 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 12 13:28:19.508139 kubelet[2734]: I0512 13:28:19.507934 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d4f6713f2d83627690592f4d5be1518-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d4f6713f2d83627690592f4d5be1518\") " pod="kube-system/kube-apiserver-localhost" May 12 13:28:19.508139 kubelet[2734]: I0512 13:28:19.507975 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d4f6713f2d83627690592f4d5be1518-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8d4f6713f2d83627690592f4d5be1518\") " pod="kube-system/kube-apiserver-localhost" May 12 13:28:19.508139 kubelet[2734]: I0512 13:28:19.507991 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:28:20.199415 kubelet[2734]: I0512 13:28:20.199324 2734 apiserver.go:52] "Watching apiserver" May 12 13:28:20.206790 kubelet[2734]: I0512 13:28:20.206715 2734 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 13:28:20.250376 kubelet[2734]: E0512 13:28:20.250347 2734 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 12 13:28:20.260970 kubelet[2734]: I0512 13:28:20.260877 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.260843758 podStartE2EDuration="2.260843758s" podCreationTimestamp="2025-05-12 
13:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:28:20.260749775 +0000 UTC m=+1.113123348" watchObservedRunningTime="2025-05-12 13:28:20.260843758 +0000 UTC m=+1.113217371" May 12 13:28:20.275481 kubelet[2734]: I0512 13:28:20.275412 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.275394455 podStartE2EDuration="1.275394455s" podCreationTimestamp="2025-05-12 13:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:28:20.268127142 +0000 UTC m=+1.120500755" watchObservedRunningTime="2025-05-12 13:28:20.275394455 +0000 UTC m=+1.127768068" May 12 13:28:20.284940 kubelet[2734]: I0512 13:28:20.284883 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.284867083 podStartE2EDuration="2.284867083s" podCreationTimestamp="2025-05-12 13:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:28:20.275554958 +0000 UTC m=+1.127928571" watchObservedRunningTime="2025-05-12 13:28:20.284867083 +0000 UTC m=+1.137240696" May 12 13:28:24.026836 sudo[1692]: pam_unix(sudo:session): session closed for user root May 12 13:28:24.028853 sshd[1691]: Connection closed by 10.0.0.1 port 38316 May 12 13:28:24.029610 sshd-session[1688]: pam_unix(sshd:session): session closed for user core May 12 13:28:24.034228 systemd[1]: sshd@6-10.0.0.76:22-10.0.0.1:38316.service: Deactivated successfully. May 12 13:28:24.036128 systemd[1]: session-7.scope: Deactivated successfully. May 12 13:28:24.036286 systemd[1]: session-7.scope: Consumed 7.023s CPU time, 242M memory peak. 
May 12 13:28:24.037163 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. May 12 13:28:24.038081 systemd-logind[1472]: Removed session 7. May 12 13:28:33.499768 update_engine[1478]: I20250512 13:28:33.499477 1478 update_attempter.cc:509] Updating boot flags... May 12 13:28:33.526983 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2830) May 12 13:28:33.573100 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2829) May 12 13:28:35.064236 kubelet[2734]: I0512 13:28:35.064199 2734 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 12 13:28:35.068882 containerd[1500]: time="2025-05-12T13:28:35.068845730Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 12 13:28:35.070171 kubelet[2734]: I0512 13:28:35.069123 2734 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 12 13:28:35.828046 kubelet[2734]: I0512 13:28:35.827999 2734 topology_manager.go:215] "Topology Admit Handler" podUID="37a6ec83-0ba1-4a5c-8277-b6e3226f89e4" podNamespace="kube-system" podName="kube-proxy-t9t67" May 12 13:28:35.838816 systemd[1]: Created slice kubepods-besteffort-pod37a6ec83_0ba1_4a5c_8277_b6e3226f89e4.slice - libcontainer container kubepods-besteffort-pod37a6ec83_0ba1_4a5c_8277_b6e3226f89e4.slice. 
May 12 13:28:36.022036 kubelet[2734]: I0512 13:28:36.021939 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37a6ec83-0ba1-4a5c-8277-b6e3226f89e4-xtables-lock\") pod \"kube-proxy-t9t67\" (UID: \"37a6ec83-0ba1-4a5c-8277-b6e3226f89e4\") " pod="kube-system/kube-proxy-t9t67" May 12 13:28:36.022036 kubelet[2734]: I0512 13:28:36.022036 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37a6ec83-0ba1-4a5c-8277-b6e3226f89e4-lib-modules\") pod \"kube-proxy-t9t67\" (UID: \"37a6ec83-0ba1-4a5c-8277-b6e3226f89e4\") " pod="kube-system/kube-proxy-t9t67" May 12 13:28:36.022197 kubelet[2734]: I0512 13:28:36.022057 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/37a6ec83-0ba1-4a5c-8277-b6e3226f89e4-kube-proxy\") pod \"kube-proxy-t9t67\" (UID: \"37a6ec83-0ba1-4a5c-8277-b6e3226f89e4\") " pod="kube-system/kube-proxy-t9t67" May 12 13:28:36.022197 kubelet[2734]: I0512 13:28:36.022076 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lfzh\" (UniqueName: \"kubernetes.io/projected/37a6ec83-0ba1-4a5c-8277-b6e3226f89e4-kube-api-access-5lfzh\") pod \"kube-proxy-t9t67\" (UID: \"37a6ec83-0ba1-4a5c-8277-b6e3226f89e4\") " pod="kube-system/kube-proxy-t9t67" May 12 13:28:36.132251 kubelet[2734]: I0512 13:28:36.132078 2734 topology_manager.go:215] "Topology Admit Handler" podUID="3a7b61cf-887a-44ed-b429-1758b17ccfef" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-c6ct9" May 12 13:28:36.141919 systemd[1]: Created slice kubepods-besteffort-pod3a7b61cf_887a_44ed_b429_1758b17ccfef.slice - libcontainer container kubepods-besteffort-pod3a7b61cf_887a_44ed_b429_1758b17ccfef.slice. 
May 12 13:28:36.223274 kubelet[2734]: I0512 13:28:36.223223 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgfd\" (UniqueName: \"kubernetes.io/projected/3a7b61cf-887a-44ed-b429-1758b17ccfef-kube-api-access-tfgfd\") pod \"tigera-operator-797db67f8-c6ct9\" (UID: \"3a7b61cf-887a-44ed-b429-1758b17ccfef\") " pod="tigera-operator/tigera-operator-797db67f8-c6ct9" May 12 13:28:36.223274 kubelet[2734]: I0512 13:28:36.223261 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a7b61cf-887a-44ed-b429-1758b17ccfef-var-lib-calico\") pod \"tigera-operator-797db67f8-c6ct9\" (UID: \"3a7b61cf-887a-44ed-b429-1758b17ccfef\") " pod="tigera-operator/tigera-operator-797db67f8-c6ct9" May 12 13:28:36.450192 containerd[1500]: time="2025-05-12T13:28:36.449841642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-c6ct9,Uid:3a7b61cf-887a-44ed-b429-1758b17ccfef,Namespace:tigera-operator,Attempt:0,}" May 12 13:28:36.450864 containerd[1500]: time="2025-05-12T13:28:36.450798022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9t67,Uid:37a6ec83-0ba1-4a5c-8277-b6e3226f89e4,Namespace:kube-system,Attempt:0,}" May 12 13:28:36.500077 containerd[1500]: time="2025-05-12T13:28:36.500018428Z" level=info msg="connecting to shim 68bcebb2268fd7c0b03fc905233c7d67990afab66263a336144fb54a05314254" address="unix:///run/containerd/s/0e8abcd4b96a1b8d80d04ab8d2de940730b63979a9de8eb38b245b364417108a" namespace=k8s.io protocol=ttrpc version=3 May 12 13:28:36.510563 containerd[1500]: time="2025-05-12T13:28:36.510527656Z" level=info msg="connecting to shim 3aead03c18ec5154ef31bab655c4e34cf33016d10fb023467d315e384c52e126" address="unix:///run/containerd/s/1f2f066969fc9e7d92384c334f00f18d751f18baae63ad8991cfb4edcb7c6d38" namespace=k8s.io protocol=ttrpc version=3 May 12 13:28:36.552142 
systemd[1]: Started cri-containerd-3aead03c18ec5154ef31bab655c4e34cf33016d10fb023467d315e384c52e126.scope - libcontainer container 3aead03c18ec5154ef31bab655c4e34cf33016d10fb023467d315e384c52e126. May 12 13:28:36.553816 systemd[1]: Started cri-containerd-68bcebb2268fd7c0b03fc905233c7d67990afab66263a336144fb54a05314254.scope - libcontainer container 68bcebb2268fd7c0b03fc905233c7d67990afab66263a336144fb54a05314254. May 12 13:28:36.585290 containerd[1500]: time="2025-05-12T13:28:36.585237320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9t67,Uid:37a6ec83-0ba1-4a5c-8277-b6e3226f89e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"68bcebb2268fd7c0b03fc905233c7d67990afab66263a336144fb54a05314254\"" May 12 13:28:36.591648 containerd[1500]: time="2025-05-12T13:28:36.591614285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-c6ct9,Uid:3a7b61cf-887a-44ed-b429-1758b17ccfef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3aead03c18ec5154ef31bab655c4e34cf33016d10fb023467d315e384c52e126\"" May 12 13:28:36.592072 containerd[1500]: time="2025-05-12T13:28:36.592036072Z" level=info msg="CreateContainer within sandbox \"68bcebb2268fd7c0b03fc905233c7d67990afab66263a336144fb54a05314254\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 12 13:28:36.595644 containerd[1500]: time="2025-05-12T13:28:36.595516973Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 12 13:28:36.603053 containerd[1500]: time="2025-05-12T13:28:36.603020209Z" level=info msg="Container 23908ac673461ae61a245e7c26826fb7deb1ac39d61c681d33536b623328c327: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:36.617348 containerd[1500]: time="2025-05-12T13:28:36.617299916Z" level=info msg="CreateContainer within sandbox \"68bcebb2268fd7c0b03fc905233c7d67990afab66263a336144fb54a05314254\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"23908ac673461ae61a245e7c26826fb7deb1ac39d61c681d33536b623328c327\"" May 12 13:28:36.617780 containerd[1500]: time="2025-05-12T13:28:36.617746665Z" level=info msg="StartContainer for \"23908ac673461ae61a245e7c26826fb7deb1ac39d61c681d33536b623328c327\"" May 12 13:28:36.619106 containerd[1500]: time="2025-05-12T13:28:36.619078949Z" level=info msg="connecting to shim 23908ac673461ae61a245e7c26826fb7deb1ac39d61c681d33536b623328c327" address="unix:///run/containerd/s/0e8abcd4b96a1b8d80d04ab8d2de940730b63979a9de8eb38b245b364417108a" protocol=ttrpc version=3 May 12 13:28:36.640161 systemd[1]: Started cri-containerd-23908ac673461ae61a245e7c26826fb7deb1ac39d61c681d33536b623328c327.scope - libcontainer container 23908ac673461ae61a245e7c26826fb7deb1ac39d61c681d33536b623328c327. May 12 13:28:36.673650 containerd[1500]: time="2025-05-12T13:28:36.673612893Z" level=info msg="StartContainer for \"23908ac673461ae61a245e7c26826fb7deb1ac39d61c681d33536b623328c327\" returns successfully" May 12 13:28:38.005577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726571290.mount: Deactivated successfully. 
May 12 13:28:38.230205 containerd[1500]: time="2025-05-12T13:28:38.230149791Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:38.230715 containerd[1500]: time="2025-05-12T13:28:38.230685062Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 12 13:28:38.231393 containerd[1500]: time="2025-05-12T13:28:38.231364141Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:38.233279 containerd[1500]: time="2025-05-12T13:28:38.233250050Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:38.233985 containerd[1500]: time="2025-05-12T13:28:38.233928689Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.638378193s" May 12 13:28:38.234032 containerd[1500]: time="2025-05-12T13:28:38.234011014Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 12 13:28:38.243926 containerd[1500]: time="2025-05-12T13:28:38.243874063Z" level=info msg="CreateContainer within sandbox \"3aead03c18ec5154ef31bab655c4e34cf33016d10fb023467d315e384c52e126\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 12 13:28:38.265138 containerd[1500]: time="2025-05-12T13:28:38.264513414Z" level=info msg="Container 
18122601cef19ce7c9a2f55d77f9e3731cbe71f884cdbe709e1cb49e3273d6c2: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:38.270092 containerd[1500]: time="2025-05-12T13:28:38.270057214Z" level=info msg="CreateContainer within sandbox \"3aead03c18ec5154ef31bab655c4e34cf33016d10fb023467d315e384c52e126\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"18122601cef19ce7c9a2f55d77f9e3731cbe71f884cdbe709e1cb49e3273d6c2\"" May 12 13:28:38.272420 containerd[1500]: time="2025-05-12T13:28:38.272388269Z" level=info msg="StartContainer for \"18122601cef19ce7c9a2f55d77f9e3731cbe71f884cdbe709e1cb49e3273d6c2\"" May 12 13:28:38.272420 containerd[1500]: time="2025-05-12T13:28:38.273183675Z" level=info msg="connecting to shim 18122601cef19ce7c9a2f55d77f9e3731cbe71f884cdbe709e1cb49e3273d6c2" address="unix:///run/containerd/s/1f2f066969fc9e7d92384c334f00f18d751f18baae63ad8991cfb4edcb7c6d38" protocol=ttrpc version=3 May 12 13:28:38.296137 systemd[1]: Started cri-containerd-18122601cef19ce7c9a2f55d77f9e3731cbe71f884cdbe709e1cb49e3273d6c2.scope - libcontainer container 18122601cef19ce7c9a2f55d77f9e3731cbe71f884cdbe709e1cb49e3273d6c2. 
May 12 13:28:38.322440 containerd[1500]: time="2025-05-12T13:28:38.322402596Z" level=info msg="StartContainer for \"18122601cef19ce7c9a2f55d77f9e3731cbe71f884cdbe709e1cb49e3273d6c2\" returns successfully" May 12 13:28:39.248680 kubelet[2734]: I0512 13:28:39.248615 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t9t67" podStartSLOduration=4.248588766 podStartE2EDuration="4.248588766s" podCreationTimestamp="2025-05-12 13:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:28:37.287140401 +0000 UTC m=+18.139514014" watchObservedRunningTime="2025-05-12 13:28:39.248588766 +0000 UTC m=+20.100962339" May 12 13:28:39.293218 kubelet[2734]: I0512 13:28:39.292513 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-c6ct9" podStartSLOduration=1.647633812 podStartE2EDuration="3.292497345s" podCreationTimestamp="2025-05-12 13:28:36 +0000 UTC" firstStartedPulling="2025-05-12 13:28:36.59484473 +0000 UTC m=+17.447218303" lastFinishedPulling="2025-05-12 13:28:38.239708223 +0000 UTC m=+19.092081836" observedRunningTime="2025-05-12 13:28:39.291743864 +0000 UTC m=+20.144117557" watchObservedRunningTime="2025-05-12 13:28:39.292497345 +0000 UTC m=+20.144870958" May 12 13:28:42.174377 kubelet[2734]: I0512 13:28:42.174332 2734 topology_manager.go:215] "Topology Admit Handler" podUID="ec04a3da-eb3b-4261-942b-366b83318a47" podNamespace="calico-system" podName="calico-typha-8d8697c86-pvn7p" May 12 13:28:42.186567 systemd[1]: Created slice kubepods-besteffort-podec04a3da_eb3b_4261_942b_366b83318a47.slice - libcontainer container kubepods-besteffort-podec04a3da_eb3b_4261_942b_366b83318a47.slice. 
May 12 13:28:42.220968 kubelet[2734]: I0512 13:28:42.220469 2734 topology_manager.go:215] "Topology Admit Handler" podUID="3fb32d1c-79d2-42a4-8127-6f748bfeab58" podNamespace="calico-system" podName="calico-node-7tzp8" May 12 13:28:42.232912 systemd[1]: Created slice kubepods-besteffort-pod3fb32d1c_79d2_42a4_8127_6f748bfeab58.slice - libcontainer container kubepods-besteffort-pod3fb32d1c_79d2_42a4_8127_6f748bfeab58.slice. May 12 13:28:42.346708 kubelet[2734]: I0512 13:28:42.346667 2734 topology_manager.go:215] "Topology Admit Handler" podUID="b569786a-d324-4ae2-9a75-6a80b38875a0" podNamespace="calico-system" podName="csi-node-driver-rdgcp" May 12 13:28:42.347238 kubelet[2734]: E0512 13:28:42.346945 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdgcp" podUID="b569786a-d324-4ae2-9a75-6a80b38875a0" May 12 13:28:42.366227 kubelet[2734]: I0512 13:28:42.366190 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nspgr\" (UniqueName: \"kubernetes.io/projected/ec04a3da-eb3b-4261-942b-366b83318a47-kube-api-access-nspgr\") pod \"calico-typha-8d8697c86-pvn7p\" (UID: \"ec04a3da-eb3b-4261-942b-366b83318a47\") " pod="calico-system/calico-typha-8d8697c86-pvn7p" May 12 13:28:42.366869 kubelet[2734]: I0512 13:28:42.366481 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-lib-modules\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.366869 kubelet[2734]: I0512 13:28:42.366506 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-var-lib-calico\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.366869 kubelet[2734]: I0512 13:28:42.366588 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3fb32d1c-79d2-42a4-8127-6f748bfeab58-node-certs\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.366869 kubelet[2734]: I0512 13:28:42.366609 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-flexvol-driver-host\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.366869 kubelet[2734]: I0512 13:28:42.366630 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec04a3da-eb3b-4261-942b-366b83318a47-tigera-ca-bundle\") pod \"calico-typha-8d8697c86-pvn7p\" (UID: \"ec04a3da-eb3b-4261-942b-366b83318a47\") " pod="calico-system/calico-typha-8d8697c86-pvn7p" May 12 13:28:42.367396 kubelet[2734]: I0512 13:28:42.366652 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-policysync\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.367396 kubelet[2734]: I0512 13:28:42.366668 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tbd5\" (UniqueName: 
\"kubernetes.io/projected/3fb32d1c-79d2-42a4-8127-6f748bfeab58-kube-api-access-2tbd5\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.367396 kubelet[2734]: I0512 13:28:42.366689 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ec04a3da-eb3b-4261-942b-366b83318a47-typha-certs\") pod \"calico-typha-8d8697c86-pvn7p\" (UID: \"ec04a3da-eb3b-4261-942b-366b83318a47\") " pod="calico-system/calico-typha-8d8697c86-pvn7p" May 12 13:28:42.367396 kubelet[2734]: I0512 13:28:42.366706 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-xtables-lock\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.367396 kubelet[2734]: I0512 13:28:42.366730 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fb32d1c-79d2-42a4-8127-6f748bfeab58-tigera-ca-bundle\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.367499 kubelet[2734]: I0512 13:28:42.366759 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-cni-log-dir\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.367499 kubelet[2734]: I0512 13:28:42.366779 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-cni-bin-dir\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.367499 kubelet[2734]: I0512 13:28:42.366798 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-cni-net-dir\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.367499 kubelet[2734]: I0512 13:28:42.366815 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3fb32d1c-79d2-42a4-8127-6f748bfeab58-var-run-calico\") pod \"calico-node-7tzp8\" (UID: \"3fb32d1c-79d2-42a4-8127-6f748bfeab58\") " pod="calico-system/calico-node-7tzp8" May 12 13:28:42.467735 kubelet[2734]: I0512 13:28:42.467493 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b569786a-d324-4ae2-9a75-6a80b38875a0-varrun\") pod \"csi-node-driver-rdgcp\" (UID: \"b569786a-d324-4ae2-9a75-6a80b38875a0\") " pod="calico-system/csi-node-driver-rdgcp" May 12 13:28:42.467735 kubelet[2734]: I0512 13:28:42.467537 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b569786a-d324-4ae2-9a75-6a80b38875a0-kubelet-dir\") pod \"csi-node-driver-rdgcp\" (UID: \"b569786a-d324-4ae2-9a75-6a80b38875a0\") " pod="calico-system/csi-node-driver-rdgcp" May 12 13:28:42.467862 kubelet[2734]: I0512 13:28:42.467723 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/b569786a-d324-4ae2-9a75-6a80b38875a0-registration-dir\") pod \"csi-node-driver-rdgcp\" (UID: \"b569786a-d324-4ae2-9a75-6a80b38875a0\") " pod="calico-system/csi-node-driver-rdgcp" May 12 13:28:42.467862 kubelet[2734]: I0512 13:28:42.467770 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjwp\" (UniqueName: \"kubernetes.io/projected/b569786a-d324-4ae2-9a75-6a80b38875a0-kube-api-access-rkjwp\") pod \"csi-node-driver-rdgcp\" (UID: \"b569786a-d324-4ae2-9a75-6a80b38875a0\") " pod="calico-system/csi-node-driver-rdgcp" May 12 13:28:42.467862 kubelet[2734]: I0512 13:28:42.467820 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b569786a-d324-4ae2-9a75-6a80b38875a0-socket-dir\") pod \"csi-node-driver-rdgcp\" (UID: \"b569786a-d324-4ae2-9a75-6a80b38875a0\") " pod="calico-system/csi-node-driver-rdgcp" May 12 13:28:42.478648 kubelet[2734]: E0512 13:28:42.478498 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.478648 kubelet[2734]: W0512 13:28:42.478518 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.478648 kubelet[2734]: E0512 13:28:42.478546 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.479487 kubelet[2734]: E0512 13:28:42.479430 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.479487 kubelet[2734]: W0512 13:28:42.479447 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.479487 kubelet[2734]: E0512 13:28:42.479460 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.484753 kubelet[2734]: E0512 13:28:42.484719 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.484753 kubelet[2734]: W0512 13:28:42.484737 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.484856 kubelet[2734]: E0512 13:28:42.484757 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.486542 kubelet[2734]: E0512 13:28:42.486525 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.486658 kubelet[2734]: W0512 13:28:42.486616 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.486658 kubelet[2734]: E0512 13:28:42.486634 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.495058 containerd[1500]: time="2025-05-12T13:28:42.495000420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d8697c86-pvn7p,Uid:ec04a3da-eb3b-4261-942b-366b83318a47,Namespace:calico-system,Attempt:0,}" May 12 13:28:42.516117 containerd[1500]: time="2025-05-12T13:28:42.516070435Z" level=info msg="connecting to shim edb0af5b78f6966c52affb368b9a9c411c944946b7d9a238e4c6d0890e2e07a8" address="unix:///run/containerd/s/950d51cc7f5ed194a0263e32847d5fb96f39514c2dc74644df4549ca72212d27" namespace=k8s.io protocol=ttrpc version=3 May 12 13:28:42.539005 containerd[1500]: time="2025-05-12T13:28:42.538930296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7tzp8,Uid:3fb32d1c-79d2-42a4-8127-6f748bfeab58,Namespace:calico-system,Attempt:0,}" May 12 13:28:42.552142 systemd[1]: Started cri-containerd-edb0af5b78f6966c52affb368b9a9c411c944946b7d9a238e4c6d0890e2e07a8.scope - libcontainer container edb0af5b78f6966c52affb368b9a9c411c944946b7d9a238e4c6d0890e2e07a8. 
May 12 13:28:42.568778 kubelet[2734]: E0512 13:28:42.568743 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.568778 kubelet[2734]: W0512 13:28:42.568773 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.568933 kubelet[2734]: E0512 13:28:42.568795 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.569295 kubelet[2734]: E0512 13:28:42.569279 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.569331 kubelet[2734]: W0512 13:28:42.569295 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.569331 kubelet[2734]: E0512 13:28:42.569314 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.569579 kubelet[2734]: E0512 13:28:42.569560 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.569579 kubelet[2734]: W0512 13:28:42.569578 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.569656 kubelet[2734]: E0512 13:28:42.569597 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.569881 kubelet[2734]: E0512 13:28:42.569866 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.569881 kubelet[2734]: W0512 13:28:42.569880 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.569943 kubelet[2734]: E0512 13:28:42.569896 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.570245 kubelet[2734]: E0512 13:28:42.570224 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.570245 kubelet[2734]: W0512 13:28:42.570240 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.570320 kubelet[2734]: E0512 13:28:42.570257 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.570510 kubelet[2734]: E0512 13:28:42.570495 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.570510 kubelet[2734]: W0512 13:28:42.570508 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.570818 kubelet[2734]: E0512 13:28:42.570786 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.570818 kubelet[2734]: W0512 13:28:42.570802 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.570818 kubelet[2734]: E0512 13:28:42.570820 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.571180 kubelet[2734]: E0512 13:28:42.571133 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.571180 kubelet[2734]: W0512 13:28:42.571151 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.571180 kubelet[2734]: E0512 13:28:42.571167 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.572435 kubelet[2734]: E0512 13:28:42.571381 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.572435 kubelet[2734]: W0512 13:28:42.571391 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.572435 kubelet[2734]: E0512 13:28:42.571400 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.572435 kubelet[2734]: E0512 13:28:42.571592 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.572435 kubelet[2734]: W0512 13:28:42.571602 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.572435 kubelet[2734]: E0512 13:28:42.571611 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.572435 kubelet[2734]: E0512 13:28:42.571773 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.572435 kubelet[2734]: W0512 13:28:42.571820 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.572435 kubelet[2734]: E0512 13:28:42.571828 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.572435 kubelet[2734]: E0512 13:28:42.571985 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.572634 kubelet[2734]: W0512 13:28:42.571993 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.572634 kubelet[2734]: E0512 13:28:42.572001 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.572634 kubelet[2734]: E0512 13:28:42.572000 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.572634 kubelet[2734]: E0512 13:28:42.572246 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.572634 kubelet[2734]: W0512 13:28:42.572260 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.572634 kubelet[2734]: E0512 13:28:42.572271 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.572634 kubelet[2734]: E0512 13:28:42.572514 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.572634 kubelet[2734]: W0512 13:28:42.572524 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.572634 kubelet[2734]: E0512 13:28:42.572533 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.572820 kubelet[2734]: E0512 13:28:42.572700 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.572820 kubelet[2734]: W0512 13:28:42.572709 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.572820 kubelet[2734]: E0512 13:28:42.572763 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.572900 kubelet[2734]: E0512 13:28:42.572880 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.572900 kubelet[2734]: W0512 13:28:42.572893 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.572900 kubelet[2734]: E0512 13:28:42.572930 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.573092 kubelet[2734]: E0512 13:28:42.573072 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.573092 kubelet[2734]: W0512 13:28:42.573088 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.573191 kubelet[2734]: E0512 13:28:42.573118 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.573442 kubelet[2734]: E0512 13:28:42.573295 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.573442 kubelet[2734]: W0512 13:28:42.573307 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.573442 kubelet[2734]: E0512 13:28:42.573321 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.573639 kubelet[2734]: E0512 13:28:42.573567 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.573639 kubelet[2734]: W0512 13:28:42.573578 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.573639 kubelet[2734]: E0512 13:28:42.573595 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.573802 kubelet[2734]: E0512 13:28:42.573780 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.573802 kubelet[2734]: W0512 13:28:42.573794 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.573802 kubelet[2734]: E0512 13:28:42.573809 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.574018 kubelet[2734]: E0512 13:28:42.573997 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.574018 kubelet[2734]: W0512 13:28:42.574011 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.574179 kubelet[2734]: E0512 13:28:42.574025 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.574250 kubelet[2734]: E0512 13:28:42.574232 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.574250 kubelet[2734]: W0512 13:28:42.574246 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.574318 kubelet[2734]: E0512 13:28:42.574261 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.575176 kubelet[2734]: E0512 13:28:42.575068 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.575176 kubelet[2734]: W0512 13:28:42.575085 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.575176 kubelet[2734]: E0512 13:28:42.575097 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.575408 kubelet[2734]: E0512 13:28:42.575377 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.575626 kubelet[2734]: W0512 13:28:42.575453 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.575626 kubelet[2734]: E0512 13:28:42.575508 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 12 13:28:42.575773 kubelet[2734]: E0512 13:28:42.575757 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.575852 kubelet[2734]: W0512 13:28:42.575813 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.575852 kubelet[2734]: E0512 13:28:42.575829 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.585671 kubelet[2734]: E0512 13:28:42.585641 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:42.585671 kubelet[2734]: W0512 13:28:42.585659 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:42.585786 kubelet[2734]: E0512 13:28:42.585684 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:42.585946 containerd[1500]: time="2025-05-12T13:28:42.585888078Z" level=info msg="connecting to shim 5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9" address="unix:///run/containerd/s/e524b267c086fee8319c36a4daeb53b3ca400e7151636644eaf6454dc3d019a5" namespace=k8s.io protocol=ttrpc version=3 May 12 13:28:42.612137 systemd[1]: Started cri-containerd-5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9.scope - libcontainer container 5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9. 
May 12 13:28:42.632601 containerd[1500]: time="2025-05-12T13:28:42.632532805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d8697c86-pvn7p,Uid:ec04a3da-eb3b-4261-942b-366b83318a47,Namespace:calico-system,Attempt:0,} returns sandbox id \"edb0af5b78f6966c52affb368b9a9c411c944946b7d9a238e4c6d0890e2e07a8\"" May 12 13:28:42.637185 containerd[1500]: time="2025-05-12T13:28:42.636813531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 12 13:28:42.638448 containerd[1500]: time="2025-05-12T13:28:42.638417008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7tzp8,Uid:3fb32d1c-79d2-42a4-8127-6f748bfeab58,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9\"" May 12 13:28:43.985425 containerd[1500]: time="2025-05-12T13:28:43.985375210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:43.986047 containerd[1500]: time="2025-05-12T13:28:43.986012520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 12 13:28:43.986738 containerd[1500]: time="2025-05-12T13:28:43.986702432Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:43.988866 containerd[1500]: time="2025-05-12T13:28:43.988827050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:43.989525 containerd[1500]: time="2025-05-12T13:28:43.989488280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.352623267s" May 12 13:28:43.989560 containerd[1500]: time="2025-05-12T13:28:43.989524642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 12 13:28:43.990450 containerd[1500]: time="2025-05-12T13:28:43.990265556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 12 13:28:44.004976 containerd[1500]: time="2025-05-12T13:28:44.004923465Z" level=info msg="CreateContainer within sandbox \"edb0af5b78f6966c52affb368b9a9c411c944946b7d9a238e4c6d0890e2e07a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 12 13:28:44.010653 containerd[1500]: time="2025-05-12T13:28:44.010613957Z" level=info msg="Container 848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:44.026519 containerd[1500]: time="2025-05-12T13:28:44.026450577Z" level=info msg="CreateContainer within sandbox \"edb0af5b78f6966c52affb368b9a9c411c944946b7d9a238e4c6d0890e2e07a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd\"" May 12 13:28:44.027146 containerd[1500]: time="2025-05-12T13:28:44.027117567Z" level=info msg="StartContainer for \"848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd\"" May 12 13:28:44.028177 containerd[1500]: time="2025-05-12T13:28:44.028148292Z" level=info msg="connecting to shim 848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd" address="unix:///run/containerd/s/950d51cc7f5ed194a0263e32847d5fb96f39514c2dc74644df4549ca72212d27" protocol=ttrpc version=3 May 12 13:28:44.050121 systemd[1]: Started 
cri-containerd-848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd.scope - libcontainer container 848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd. May 12 13:28:44.083801 containerd[1500]: time="2025-05-12T13:28:44.083753872Z" level=info msg="StartContainer for \"848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd\" returns successfully" May 12 13:28:44.228231 kubelet[2734]: E0512 13:28:44.228161 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdgcp" podUID="b569786a-d324-4ae2-9a75-6a80b38875a0" May 12 13:28:44.319116 kubelet[2734]: I0512 13:28:44.318977 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8d8697c86-pvn7p" podStartSLOduration=0.962727359 podStartE2EDuration="2.318945717s" podCreationTimestamp="2025-05-12 13:28:42 +0000 UTC" firstStartedPulling="2025-05-12 13:28:42.633964154 +0000 UTC m=+23.486337727" lastFinishedPulling="2025-05-12 13:28:43.990182472 +0000 UTC m=+24.842556085" observedRunningTime="2025-05-12 13:28:44.317651019 +0000 UTC m=+25.170024632" watchObservedRunningTime="2025-05-12 13:28:44.318945717 +0000 UTC m=+25.171319370" May 12 13:28:44.381721 kubelet[2734]: E0512 13:28:44.381673 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 12 13:28:44.381721 kubelet[2734]: W0512 13:28:44.381700 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 12 13:28:44.381721 kubelet[2734]: E0512 13:28:44.381719 2734 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" May 12 13:28:45.202183 containerd[1500]: time="2025-05-12T13:28:45.202134589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:45.202847 containerd[1500]: time="2025-05-12T13:28:45.202814658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 12 13:28:45.203721 containerd[1500]: time="2025-05-12T13:28:45.203685455Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:45.205600 containerd[1500]: time="2025-05-12T13:28:45.205567895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:45.206519 containerd[1500]: time="2025-05-12T13:28:45.206487294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.216192016s" May 12 13:28:45.206550 containerd[1500]: time="2025-05-12T13:28:45.206520215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 12 13:28:45.210358 containerd[1500]: time="2025-05-12T13:28:45.210309496Z" level=info msg="CreateContainer within sandbox \"5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9\" for container
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 12 13:28:45.217404 containerd[1500]: time="2025-05-12T13:28:45.217365435Z" level=info msg="Container 41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:45.228081 containerd[1500]: time="2025-05-12T13:28:45.228032648Z" level=info msg="CreateContainer within sandbox \"5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7\"" May 12 13:28:45.228889 containerd[1500]: time="2025-05-12T13:28:45.228852763Z" level=info msg="StartContainer for \"41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7\"" May 12 13:28:45.231238 containerd[1500]: time="2025-05-12T13:28:45.230257743Z" level=info msg="connecting to shim 41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7" address="unix:///run/containerd/s/e524b267c086fee8319c36a4daeb53b3ca400e7151636644eaf6454dc3d019a5" protocol=ttrpc version=3 May 12 13:28:45.262123 systemd[1]: Started cri-containerd-41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7.scope - libcontainer container 41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7. May 12 13:28:45.314431 systemd[1]: cri-containerd-41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7.scope: Deactivated successfully. May 12 13:28:45.315993 systemd[1]: cri-containerd-41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7.scope: Consumed 38ms CPU time, 8M memory peak, 6.2M written to disk. 
May 12 13:28:45.347901 containerd[1500]: time="2025-05-12T13:28:45.347837015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7\" id:\"41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7\" pid:3356 exited_at:{seconds:1747056525 nanos:341687034}" May 12 13:28:45.352069 containerd[1500]: time="2025-05-12T13:28:45.352036633Z" level=info msg="StartContainer for \"41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7\" returns successfully" May 12 13:28:45.354984 kubelet[2734]: I0512 13:28:45.354719 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 12 13:28:45.358431 containerd[1500]: time="2025-05-12T13:28:45.358384063Z" level=info msg="received exit event container_id:\"41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7\" id:\"41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7\" pid:3356 exited_at:{seconds:1747056525 nanos:341687034}" May 12 13:28:45.390813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41abc7f054a8e46363ca79552a9b387afef0be3ac449cba019c701387b5a5da7-rootfs.mount: Deactivated successfully. 
May 12 13:28:46.229986 kubelet[2734]: E0512 13:28:46.228940 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdgcp" podUID="b569786a-d324-4ae2-9a75-6a80b38875a0" May 12 13:28:46.359845 containerd[1500]: time="2025-05-12T13:28:46.359784739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 12 13:28:48.228314 kubelet[2734]: E0512 13:28:48.228260 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdgcp" podUID="b569786a-d324-4ae2-9a75-6a80b38875a0" May 12 13:28:49.629687 containerd[1500]: time="2025-05-12T13:28:49.629588768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:49.630997 containerd[1500]: time="2025-05-12T13:28:49.630967299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 12 13:28:49.631632 containerd[1500]: time="2025-05-12T13:28:49.631606842Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:49.634141 containerd[1500]: time="2025-05-12T13:28:49.634091972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:49.634807 containerd[1500]: time="2025-05-12T13:28:49.634761677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" 
with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.274921215s" May 12 13:28:49.634807 containerd[1500]: time="2025-05-12T13:28:49.634797678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 12 13:28:49.638943 containerd[1500]: time="2025-05-12T13:28:49.638898867Z" level=info msg="CreateContainer within sandbox \"5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 12 13:28:49.646271 containerd[1500]: time="2025-05-12T13:28:49.645227257Z" level=info msg="Container 6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:49.653625 containerd[1500]: time="2025-05-12T13:28:49.653591922Z" level=info msg="CreateContainer within sandbox \"5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c\"" May 12 13:28:49.654104 containerd[1500]: time="2025-05-12T13:28:49.654078219Z" level=info msg="StartContainer for \"6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c\"" May 12 13:28:49.655687 containerd[1500]: time="2025-05-12T13:28:49.655610195Z" level=info msg="connecting to shim 6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c" address="unix:///run/containerd/s/e524b267c086fee8319c36a4daeb53b3ca400e7151636644eaf6454dc3d019a5" protocol=ttrpc version=3 May 12 13:28:49.679198 systemd[1]: Started cri-containerd-6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c.scope - libcontainer container 
6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c. May 12 13:28:49.750178 containerd[1500]: time="2025-05-12T13:28:49.745689952Z" level=info msg="StartContainer for \"6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c\" returns successfully" May 12 13:28:50.180440 systemd[1]: Started sshd@7-10.0.0.76:22-10.0.0.1:53182.service - OpenSSH per-connection server daemon (10.0.0.1:53182). May 12 13:28:50.228463 kubelet[2734]: E0512 13:28:50.228129 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rdgcp" podUID="b569786a-d324-4ae2-9a75-6a80b38875a0" May 12 13:28:50.287141 sshd[3434]: Accepted publickey for core from 10.0.0.1 port 53182 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:28:50.288326 sshd-session[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:28:50.296020 systemd-logind[1472]: New session 8 of user core. May 12 13:28:50.304157 systemd[1]: Started session-8.scope - Session 8 of User core. May 12 13:28:50.425398 containerd[1500]: time="2025-05-12T13:28:50.425358247Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 13:28:50.432073 systemd[1]: cri-containerd-6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c.scope: Deactivated successfully. 
May 12 13:28:50.432575 containerd[1500]: time="2025-05-12T13:28:50.432538379Z" level=info msg="received exit event container_id:\"6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c\" id:\"6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c\" pid:3413 exited_at:{seconds:1747056530 nanos:430609871}" May 12 13:28:50.435047 systemd[1]: cri-containerd-6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c.scope: Consumed 485ms CPU time, 160.7M memory peak, 48K read from disk, 150.3M written to disk. May 12 13:28:50.439494 containerd[1500]: time="2025-05-12T13:28:50.439456701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c\" id:\"6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c\" pid:3413 exited_at:{seconds:1747056530 nanos:430609871}" May 12 13:28:50.443216 sshd[3436]: Connection closed by 10.0.0.1 port 53182 May 12 13:28:50.443516 sshd-session[3434]: pam_unix(sshd:session): session closed for user core May 12 13:28:50.444996 kubelet[2734]: I0512 13:28:50.444002 2734 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 12 13:28:50.449119 systemd[1]: sshd@7-10.0.0.76:22-10.0.0.1:53182.service: Deactivated successfully. May 12 13:28:50.450674 systemd[1]: session-8.scope: Deactivated successfully. May 12 13:28:50.454667 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit. May 12 13:28:50.456104 systemd-logind[1472]: Removed session 8. May 12 13:28:50.467618 kubelet[2734]: I0512 13:28:50.464418 2734 topology_manager.go:215] "Topology Admit Handler" podUID="bccbcfb8-e672-441e-9343-2cea5d3395e6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7hbtm" May 12 13:28:50.466305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6eb86956552133b5eba782dfac2905a14f1fd86409e61f5958d5d8b63d292a6c-rootfs.mount: Deactivated successfully. 
May 12 13:28:50.470824 kubelet[2734]: I0512 13:28:50.470620 2734 topology_manager.go:215] "Topology Admit Handler" podUID="22ec1171-e30f-460d-b307-c9c15268f694" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5xc5z" May 12 13:28:50.470926 kubelet[2734]: I0512 13:28:50.470820 2734 topology_manager.go:215] "Topology Admit Handler" podUID="c875fed8-1144-4edd-becd-9872677e567b" podNamespace="calico-system" podName="calico-kube-controllers-6575dcdf85-mqr66" May 12 13:28:50.471583 kubelet[2734]: I0512 13:28:50.471156 2734 topology_manager.go:215] "Topology Admit Handler" podUID="223e65e8-dd23-4599-ac1d-826d7fd16ccd" podNamespace="calico-apiserver" podName="calico-apiserver-5599775c9-c8whr" May 12 13:28:50.474650 kubelet[2734]: I0512 13:28:50.473220 2734 topology_manager.go:215] "Topology Admit Handler" podUID="2d751f48-caa2-4e41-89a9-f6a5fe02729e" podNamespace="calico-apiserver" podName="calico-apiserver-5599775c9-pcf4c" May 12 13:28:50.474650 kubelet[2734]: I0512 13:28:50.473363 2734 topology_manager.go:215] "Topology Admit Handler" podUID="0e405ba7-374c-45e2-9ff8-52063ef948ea" podNamespace="calico-apiserver" podName="calico-apiserver-565df67cbb-w69v8" May 12 13:28:50.486344 systemd[1]: Created slice kubepods-burstable-podbccbcfb8_e672_441e_9343_2cea5d3395e6.slice - libcontainer container kubepods-burstable-podbccbcfb8_e672_441e_9343_2cea5d3395e6.slice. May 12 13:28:50.491382 systemd[1]: Created slice kubepods-burstable-pod22ec1171_e30f_460d_b307_c9c15268f694.slice - libcontainer container kubepods-burstable-pod22ec1171_e30f_460d_b307_c9c15268f694.slice. May 12 13:28:50.499187 systemd[1]: Created slice kubepods-besteffort-podc875fed8_1144_4edd_becd_9872677e567b.slice - libcontainer container kubepods-besteffort-podc875fed8_1144_4edd_becd_9872677e567b.slice. May 12 13:28:50.505867 systemd[1]: Created slice kubepods-besteffort-pod223e65e8_dd23_4599_ac1d_826d7fd16ccd.slice - libcontainer container kubepods-besteffort-pod223e65e8_dd23_4599_ac1d_826d7fd16ccd.slice. 
May 12 13:28:50.518033 systemd[1]: Created slice kubepods-besteffort-pod0e405ba7_374c_45e2_9ff8_52063ef948ea.slice - libcontainer container kubepods-besteffort-pod0e405ba7_374c_45e2_9ff8_52063ef948ea.slice. May 12 13:28:50.528165 systemd[1]: Created slice kubepods-besteffort-pod2d751f48_caa2_4e41_89a9_f6a5fe02729e.slice - libcontainer container kubepods-besteffort-pod2d751f48_caa2_4e41_89a9_f6a5fe02729e.slice. May 12 13:28:50.528768 kubelet[2734]: I0512 13:28:50.528706 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bccbcfb8-e672-441e-9343-2cea5d3395e6-config-volume\") pod \"coredns-7db6d8ff4d-7hbtm\" (UID: \"bccbcfb8-e672-441e-9343-2cea5d3395e6\") " pod="kube-system/coredns-7db6d8ff4d-7hbtm" May 12 13:28:50.529004 kubelet[2734]: I0512 13:28:50.528898 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx8pm\" (UniqueName: \"kubernetes.io/projected/bccbcfb8-e672-441e-9343-2cea5d3395e6-kube-api-access-dx8pm\") pod \"coredns-7db6d8ff4d-7hbtm\" (UID: \"bccbcfb8-e672-441e-9343-2cea5d3395e6\") " pod="kube-system/coredns-7db6d8ff4d-7hbtm" May 12 13:28:50.630069 kubelet[2734]: I0512 13:28:50.630022 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fsph\" (UniqueName: \"kubernetes.io/projected/0e405ba7-374c-45e2-9ff8-52063ef948ea-kube-api-access-9fsph\") pod \"calico-apiserver-565df67cbb-w69v8\" (UID: \"0e405ba7-374c-45e2-9ff8-52063ef948ea\") " pod="calico-apiserver/calico-apiserver-565df67cbb-w69v8" May 12 13:28:50.630069 kubelet[2734]: I0512 13:28:50.630075 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/223e65e8-dd23-4599-ac1d-826d7fd16ccd-calico-apiserver-certs\") pod \"calico-apiserver-5599775c9-c8whr\" (UID: 
\"223e65e8-dd23-4599-ac1d-826d7fd16ccd\") " pod="calico-apiserver/calico-apiserver-5599775c9-c8whr" May 12 13:28:50.630219 kubelet[2734]: I0512 13:28:50.630102 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkrl\" (UniqueName: \"kubernetes.io/projected/c875fed8-1144-4edd-becd-9872677e567b-kube-api-access-mzkrl\") pod \"calico-kube-controllers-6575dcdf85-mqr66\" (UID: \"c875fed8-1144-4edd-becd-9872677e567b\") " pod="calico-system/calico-kube-controllers-6575dcdf85-mqr66" May 12 13:28:50.630219 kubelet[2734]: I0512 13:28:50.630126 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ec1171-e30f-460d-b307-c9c15268f694-config-volume\") pod \"coredns-7db6d8ff4d-5xc5z\" (UID: \"22ec1171-e30f-460d-b307-c9c15268f694\") " pod="kube-system/coredns-7db6d8ff4d-5xc5z" May 12 13:28:50.630274 kubelet[2734]: I0512 13:28:50.630240 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbpd2\" (UniqueName: \"kubernetes.io/projected/22ec1171-e30f-460d-b307-c9c15268f694-kube-api-access-lbpd2\") pod \"coredns-7db6d8ff4d-5xc5z\" (UID: \"22ec1171-e30f-460d-b307-c9c15268f694\") " pod="kube-system/coredns-7db6d8ff4d-5xc5z" May 12 13:28:50.630427 kubelet[2734]: I0512 13:28:50.630380 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c875fed8-1144-4edd-becd-9872677e567b-tigera-ca-bundle\") pod \"calico-kube-controllers-6575dcdf85-mqr66\" (UID: \"c875fed8-1144-4edd-becd-9872677e567b\") " pod="calico-system/calico-kube-controllers-6575dcdf85-mqr66" May 12 13:28:50.631472 kubelet[2734]: I0512 13:28:50.631428 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zp56\" (UniqueName: 
\"kubernetes.io/projected/223e65e8-dd23-4599-ac1d-826d7fd16ccd-kube-api-access-9zp56\") pod \"calico-apiserver-5599775c9-c8whr\" (UID: \"223e65e8-dd23-4599-ac1d-826d7fd16ccd\") " pod="calico-apiserver/calico-apiserver-5599775c9-c8whr" May 12 13:28:50.631529 kubelet[2734]: I0512 13:28:50.631515 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2d751f48-caa2-4e41-89a9-f6a5fe02729e-calico-apiserver-certs\") pod \"calico-apiserver-5599775c9-pcf4c\" (UID: \"2d751f48-caa2-4e41-89a9-f6a5fe02729e\") " pod="calico-apiserver/calico-apiserver-5599775c9-pcf4c" May 12 13:28:50.631559 kubelet[2734]: I0512 13:28:50.631551 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldxqn\" (UniqueName: \"kubernetes.io/projected/2d751f48-caa2-4e41-89a9-f6a5fe02729e-kube-api-access-ldxqn\") pod \"calico-apiserver-5599775c9-pcf4c\" (UID: \"2d751f48-caa2-4e41-89a9-f6a5fe02729e\") " pod="calico-apiserver/calico-apiserver-5599775c9-pcf4c" May 12 13:28:50.632107 kubelet[2734]: I0512 13:28:50.632035 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e405ba7-374c-45e2-9ff8-52063ef948ea-calico-apiserver-certs\") pod \"calico-apiserver-565df67cbb-w69v8\" (UID: \"0e405ba7-374c-45e2-9ff8-52063ef948ea\") " pod="calico-apiserver/calico-apiserver-565df67cbb-w69v8" May 12 13:28:50.803081 containerd[1500]: time="2025-05-12T13:28:50.802914733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xc5z,Uid:22ec1171-e30f-460d-b307-c9c15268f694,Namespace:kube-system,Attempt:0,}" May 12 13:28:50.803081 containerd[1500]: time="2025-05-12T13:28:50.803070338Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6575dcdf85-mqr66,Uid:c875fed8-1144-4edd-becd-9872677e567b,Namespace:calico-system,Attempt:0,}" May 12 13:28:50.803430 containerd[1500]: time="2025-05-12T13:28:50.803017337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7hbtm,Uid:bccbcfb8-e672-441e-9343-2cea5d3395e6,Namespace:kube-system,Attempt:0,}" May 12 13:28:50.821946 containerd[1500]: time="2025-05-12T13:28:50.821886879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5599775c9-c8whr,Uid:223e65e8-dd23-4599-ac1d-826d7fd16ccd,Namespace:calico-apiserver,Attempt:0,}" May 12 13:28:50.851031 containerd[1500]: time="2025-05-12T13:28:50.850764572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5599775c9-pcf4c,Uid:2d751f48-caa2-4e41-89a9-f6a5fe02729e,Namespace:calico-apiserver,Attempt:0,}" May 12 13:28:50.856796 containerd[1500]: time="2025-05-12T13:28:50.856773783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565df67cbb-w69v8,Uid:0e405ba7-374c-45e2-9ff8-52063ef948ea,Namespace:calico-apiserver,Attempt:0,}" May 12 13:28:51.111875 containerd[1500]: time="2025-05-12T13:28:51.111722833Z" level=error msg="Failed to destroy network for sandbox \"d71664f53845f60b358deb879130bfb5ec4183e82c243ddea9dbca2087c316bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.112409 containerd[1500]: time="2025-05-12T13:28:51.112361015Z" level=error msg="Failed to destroy network for sandbox \"6e12f4a6c9ac662aa2e0683a222f71bd5e36f0878180dd1566d21a5a45184521\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.115898 containerd[1500]: 
time="2025-05-12T13:28:51.115837333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5599775c9-c8whr,Uid:223e65e8-dd23-4599-ac1d-826d7fd16ccd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71664f53845f60b358deb879130bfb5ec4183e82c243ddea9dbca2087c316bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.116654 containerd[1500]: time="2025-05-12T13:28:51.116446433Z" level=error msg="Failed to destroy network for sandbox \"691a9b0400f9b5aba982e5b789e52f84f587c3fedf187f6e8b5cc7aacdf9aa18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.117426 containerd[1500]: time="2025-05-12T13:28:51.117387025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xc5z,Uid:22ec1171-e30f-460d-b307-c9c15268f694,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e12f4a6c9ac662aa2e0683a222f71bd5e36f0878180dd1566d21a5a45184521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.117824 containerd[1500]: time="2025-05-12T13:28:51.117795999Z" level=error msg="Failed to destroy network for sandbox \"3108c10215c3b7b1f457cae5d1e56226e507c661b3bdedaffe0ec9323fc557a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.118107 kubelet[2734]: E0512 13:28:51.118053 2734 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e12f4a6c9ac662aa2e0683a222f71bd5e36f0878180dd1566d21a5a45184521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.118219 containerd[1500]: time="2025-05-12T13:28:51.118166932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5599775c9-pcf4c,Uid:2d751f48-caa2-4e41-89a9-f6a5fe02729e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a9b0400f9b5aba982e5b789e52f84f587c3fedf187f6e8b5cc7aacdf9aa18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.118401 kubelet[2734]: E0512 13:28:51.118369 2734 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71664f53845f60b358deb879130bfb5ec4183e82c243ddea9dbca2087c316bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.118428 kubelet[2734]: E0512 13:28:51.118363 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e12f4a6c9ac662aa2e0683a222f71bd5e36f0878180dd1566d21a5a45184521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xc5z" May 12 13:28:51.118452 kubelet[2734]: E0512 13:28:51.118421 2734 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71664f53845f60b358deb879130bfb5ec4183e82c243ddea9dbca2087c316bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5599775c9-c8whr" May 12 13:28:51.118452 kubelet[2734]: E0512 13:28:51.118433 2734 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e12f4a6c9ac662aa2e0683a222f71bd5e36f0878180dd1566d21a5a45184521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5xc5z" May 12 13:28:51.118452 kubelet[2734]: E0512 13:28:51.118438 2734 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d71664f53845f60b358deb879130bfb5ec4183e82c243ddea9dbca2087c316bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5599775c9-c8whr" May 12 13:28:51.118520 kubelet[2734]: E0512 13:28:51.118484 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5xc5z_kube-system(22ec1171-e30f-460d-b307-c9c15268f694)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5xc5z_kube-system(22ec1171-e30f-460d-b307-c9c15268f694)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e12f4a6c9ac662aa2e0683a222f71bd5e36f0878180dd1566d21a5a45184521\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5xc5z" podUID="22ec1171-e30f-460d-b307-c9c15268f694" May 12 13:28:51.118559 kubelet[2734]: E0512 13:28:51.118523 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5599775c9-c8whr_calico-apiserver(223e65e8-dd23-4599-ac1d-826d7fd16ccd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5599775c9-c8whr_calico-apiserver(223e65e8-dd23-4599-ac1d-826d7fd16ccd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d71664f53845f60b358deb879130bfb5ec4183e82c243ddea9dbca2087c316bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5599775c9-c8whr" podUID="223e65e8-dd23-4599-ac1d-826d7fd16ccd" May 12 13:28:51.118981 kubelet[2734]: E0512 13:28:51.118939 2734 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a9b0400f9b5aba982e5b789e52f84f587c3fedf187f6e8b5cc7aacdf9aa18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.119040 containerd[1500]: time="2025-05-12T13:28:51.119004400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6575dcdf85-mqr66,Uid:c875fed8-1144-4edd-becd-9872677e567b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3108c10215c3b7b1f457cae5d1e56226e507c661b3bdedaffe0ec9323fc557a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.119407 kubelet[2734]: E0512 13:28:51.119381 2734 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3108c10215c3b7b1f457cae5d1e56226e507c661b3bdedaffe0ec9323fc557a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.119445 kubelet[2734]: E0512 13:28:51.119419 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3108c10215c3b7b1f457cae5d1e56226e507c661b3bdedaffe0ec9323fc557a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6575dcdf85-mqr66" May 12 13:28:51.119487 kubelet[2734]: E0512 13:28:51.119469 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a9b0400f9b5aba982e5b789e52f84f587c3fedf187f6e8b5cc7aacdf9aa18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5599775c9-pcf4c" May 12 13:28:51.119525 kubelet[2734]: E0512 13:28:51.119490 2734 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a9b0400f9b5aba982e5b789e52f84f587c3fedf187f6e8b5cc7aacdf9aa18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-5599775c9-pcf4c" May 12 13:28:51.119563 kubelet[2734]: E0512 13:28:51.119531 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5599775c9-pcf4c_calico-apiserver(2d751f48-caa2-4e41-89a9-f6a5fe02729e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5599775c9-pcf4c_calico-apiserver(2d751f48-caa2-4e41-89a9-f6a5fe02729e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"691a9b0400f9b5aba982e5b789e52f84f587c3fedf187f6e8b5cc7aacdf9aa18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5599775c9-pcf4c" podUID="2d751f48-caa2-4e41-89a9-f6a5fe02729e" May 12 13:28:51.119605 kubelet[2734]: E0512 13:28:51.119581 2734 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3108c10215c3b7b1f457cae5d1e56226e507c661b3bdedaffe0ec9323fc557a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6575dcdf85-mqr66" May 12 13:28:51.119627 kubelet[2734]: E0512 13:28:51.119608 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6575dcdf85-mqr66_calico-system(c875fed8-1144-4edd-becd-9872677e567b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6575dcdf85-mqr66_calico-system(c875fed8-1144-4edd-becd-9872677e567b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3108c10215c3b7b1f457cae5d1e56226e507c661b3bdedaffe0ec9323fc557a2\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6575dcdf85-mqr66" podUID="c875fed8-1144-4edd-becd-9872677e567b" May 12 13:28:51.119816 containerd[1500]: time="2025-05-12T13:28:51.119737105Z" level=error msg="Failed to destroy network for sandbox \"1c4fe1331796be7c4e14ed884546d56d6d96b9ff53c01728a31b15ea34452ecd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.120602 containerd[1500]: time="2025-05-12T13:28:51.120574733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7hbtm,Uid:bccbcfb8-e672-441e-9343-2cea5d3395e6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4fe1331796be7c4e14ed884546d56d6d96b9ff53c01728a31b15ea34452ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.120835 kubelet[2734]: E0512 13:28:51.120811 2734 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4fe1331796be7c4e14ed884546d56d6d96b9ff53c01728a31b15ea34452ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.120880 kubelet[2734]: E0512 13:28:51.120849 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4fe1331796be7c4e14ed884546d56d6d96b9ff53c01728a31b15ea34452ecd\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7hbtm" May 12 13:28:51.120880 kubelet[2734]: E0512 13:28:51.120865 2734 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4fe1331796be7c4e14ed884546d56d6d96b9ff53c01728a31b15ea34452ecd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7hbtm" May 12 13:28:51.120930 kubelet[2734]: E0512 13:28:51.120896 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7hbtm_kube-system(bccbcfb8-e672-441e-9343-2cea5d3395e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7hbtm_kube-system(bccbcfb8-e672-441e-9343-2cea5d3395e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c4fe1331796be7c4e14ed884546d56d6d96b9ff53c01728a31b15ea34452ecd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7hbtm" podUID="bccbcfb8-e672-441e-9343-2cea5d3395e6" May 12 13:28:51.128026 containerd[1500]: time="2025-05-12T13:28:51.127990984Z" level=error msg="Failed to destroy network for sandbox \"083417b21b0fcd14223b583783a0e22de73b55c5dc65003afca8856438c90589\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.129067 containerd[1500]: time="2025-05-12T13:28:51.129024339Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-565df67cbb-w69v8,Uid:0e405ba7-374c-45e2-9ff8-52063ef948ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"083417b21b0fcd14223b583783a0e22de73b55c5dc65003afca8856438c90589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.129241 kubelet[2734]: E0512 13:28:51.129203 2734 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"083417b21b0fcd14223b583783a0e22de73b55c5dc65003afca8856438c90589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:51.129292 kubelet[2734]: E0512 13:28:51.129253 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"083417b21b0fcd14223b583783a0e22de73b55c5dc65003afca8856438c90589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565df67cbb-w69v8" May 12 13:28:51.129292 kubelet[2734]: E0512 13:28:51.129271 2734 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"083417b21b0fcd14223b583783a0e22de73b55c5dc65003afca8856438c90589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-565df67cbb-w69v8" May 12 13:28:51.129335 kubelet[2734]: E0512 
13:28:51.129301 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-565df67cbb-w69v8_calico-apiserver(0e405ba7-374c-45e2-9ff8-52063ef948ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-565df67cbb-w69v8_calico-apiserver(0e405ba7-374c-45e2-9ff8-52063ef948ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"083417b21b0fcd14223b583783a0e22de73b55c5dc65003afca8856438c90589\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-565df67cbb-w69v8" podUID="0e405ba7-374c-45e2-9ff8-52063ef948ea" May 12 13:28:51.381429 containerd[1500]: time="2025-05-12T13:28:51.381304125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 12 13:28:52.233569 systemd[1]: Created slice kubepods-besteffort-podb569786a_d324_4ae2_9a75_6a80b38875a0.slice - libcontainer container kubepods-besteffort-podb569786a_d324_4ae2_9a75_6a80b38875a0.slice. 
May 12 13:28:52.235785 containerd[1500]: time="2025-05-12T13:28:52.235742523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdgcp,Uid:b569786a-d324-4ae2-9a75-6a80b38875a0,Namespace:calico-system,Attempt:0,}" May 12 13:28:52.277399 containerd[1500]: time="2025-05-12T13:28:52.277341524Z" level=error msg="Failed to destroy network for sandbox \"a85040605a4ac4eda1fbff98bff540172b251c7219df58188a3f55ff2078ea08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:52.278400 containerd[1500]: time="2025-05-12T13:28:52.278317556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdgcp,Uid:b569786a-d324-4ae2-9a75-6a80b38875a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85040605a4ac4eda1fbff98bff540172b251c7219df58188a3f55ff2078ea08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:52.278624 kubelet[2734]: E0512 13:28:52.278578 2734 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85040605a4ac4eda1fbff98bff540172b251c7219df58188a3f55ff2078ea08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 12 13:28:52.278859 kubelet[2734]: E0512 13:28:52.278637 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85040605a4ac4eda1fbff98bff540172b251c7219df58188a3f55ff2078ea08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdgcp" May 12 13:28:52.278859 kubelet[2734]: E0512 13:28:52.278656 2734 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a85040605a4ac4eda1fbff98bff540172b251c7219df58188a3f55ff2078ea08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rdgcp" May 12 13:28:52.278859 kubelet[2734]: E0512 13:28:52.278721 2734 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rdgcp_calico-system(b569786a-d324-4ae2-9a75-6a80b38875a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rdgcp_calico-system(b569786a-d324-4ae2-9a75-6a80b38875a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a85040605a4ac4eda1fbff98bff540172b251c7219df58188a3f55ff2078ea08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rdgcp" podUID="b569786a-d324-4ae2-9a75-6a80b38875a0" May 12 13:28:52.281141 systemd[1]: run-netns-cni\x2db02936f1\x2debc0\x2dfa9d\x2d6692\x2d00bcc6ec536a.mount: Deactivated successfully. May 12 13:28:55.161543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738897625.mount: Deactivated successfully. 
May 12 13:28:55.382645 containerd[1500]: time="2025-05-12T13:28:55.382589364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:55.401251 containerd[1500]: time="2025-05-12T13:28:55.382988536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 12 13:28:55.401331 containerd[1500]: time="2025-05-12T13:28:55.383844721Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:55.401557 containerd[1500]: time="2025-05-12T13:28:55.386156590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.004809104s" May 12 13:28:55.401599 containerd[1500]: time="2025-05-12T13:28:55.401554768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 12 13:28:55.401820 containerd[1500]: time="2025-05-12T13:28:55.401798095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:28:55.409660 containerd[1500]: time="2025-05-12T13:28:55.409615128Z" level=info msg="CreateContainer within sandbox \"5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 12 13:28:55.419048 containerd[1500]: time="2025-05-12T13:28:55.418322787Z" level=info msg="Container 
342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241: CDI devices from CRI Config.CDIDevices: []" May 12 13:28:55.428469 containerd[1500]: time="2025-05-12T13:28:55.427847510Z" level=info msg="CreateContainer within sandbox \"5f44f93e7c7f2319a8bc38c8a757b191bd41add8371c47b292b6aaff266fc8f9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241\"" May 12 13:28:55.429501 containerd[1500]: time="2025-05-12T13:28:55.429464358Z" level=info msg="StartContainer for \"342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241\"" May 12 13:28:55.432588 containerd[1500]: time="2025-05-12T13:28:55.432548770Z" level=info msg="connecting to shim 342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241" address="unix:///run/containerd/s/e524b267c086fee8319c36a4daeb53b3ca400e7151636644eaf6454dc3d019a5" protocol=ttrpc version=3 May 12 13:28:55.456479 systemd[1]: Started cri-containerd-342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241.scope - libcontainer container 342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241. May 12 13:28:55.458005 systemd[1]: Started sshd@8-10.0.0.76:22-10.0.0.1:56898.service - OpenSSH per-connection server daemon (10.0.0.1:56898). May 12 13:28:55.502553 containerd[1500]: time="2025-05-12T13:28:55.501284655Z" level=info msg="StartContainer for \"342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241\" returns successfully" May 12 13:28:55.509202 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 56898 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:28:55.511056 sshd-session[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:28:55.522837 systemd-logind[1472]: New session 9 of user core. May 12 13:28:55.537127 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 12 13:28:55.656998 sshd[3765]: Connection closed by 10.0.0.1 port 56898 May 12 13:28:55.657346 sshd-session[3745]: pam_unix(sshd:session): session closed for user core May 12 13:28:55.661295 systemd[1]: sshd@8-10.0.0.76:22-10.0.0.1:56898.service: Deactivated successfully. May 12 13:28:55.663410 systemd[1]: session-9.scope: Deactivated successfully. May 12 13:28:55.664351 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit. May 12 13:28:55.665529 systemd-logind[1472]: Removed session 9. May 12 13:28:55.667260 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 12 13:28:55.667350 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 12 13:28:56.429870 kubelet[2734]: I0512 13:28:56.429805 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7tzp8" podStartSLOduration=1.666722562 podStartE2EDuration="14.429789417s" podCreationTimestamp="2025-05-12 13:28:42 +0000 UTC" firstStartedPulling="2025-05-12 13:28:42.639639627 +0000 UTC m=+23.492013240" lastFinishedPulling="2025-05-12 13:28:55.402706482 +0000 UTC m=+36.255080095" observedRunningTime="2025-05-12 13:28:56.428786989 +0000 UTC m=+37.281160642" watchObservedRunningTime="2025-05-12 13:28:56.429789417 +0000 UTC m=+37.282163030" May 12 13:28:57.417554 kubelet[2734]: I0512 13:28:57.417520 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 12 13:28:57.834315 kubelet[2734]: I0512 13:28:57.834266 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 12 13:28:58.192990 kernel: bpftool[3975]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 12 13:28:58.341677 systemd-networkd[1407]: vxlan.calico: Link UP May 12 13:28:58.341685 systemd-networkd[1407]: vxlan.calico: Gained carrier May 12 13:28:59.906136 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL May 12 13:29:00.254932 kubelet[2734]: I0512 
13:29:00.254680 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 12 13:29:00.382991 containerd[1500]: time="2025-05-12T13:29:00.382094850Z" level=info msg="TaskExit event in podsandbox handler container_id:\"342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241\" id:\"b83744831d2d966a3be096fd41f766b198924ae01d6b7d81c37c8b161b9dd0e1\" pid:4058 exited_at:{seconds:1747056540 nanos:381552196}" May 12 13:29:00.471136 containerd[1500]: time="2025-05-12T13:29:00.470943509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241\" id:\"9496c91d365f3e98cf41728d8df1784fd765f40ab0e5d86ddd30a07ffafd4f61\" pid:4084 exited_at:{seconds:1747056540 nanos:468373042}" May 12 13:29:00.671598 systemd[1]: Started sshd@9-10.0.0.76:22-10.0.0.1:56910.service - OpenSSH per-connection server daemon (10.0.0.1:56910). May 12 13:29:00.758327 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 56910 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:29:00.759839 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:29:00.764249 systemd-logind[1472]: New session 10 of user core. May 12 13:29:00.775220 systemd[1]: Started session-10.scope - Session 10 of User core. May 12 13:29:00.926420 sshd[4100]: Connection closed by 10.0.0.1 port 56910 May 12 13:29:00.926813 sshd-session[4097]: pam_unix(sshd:session): session closed for user core May 12 13:29:00.938243 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit. May 12 13:29:00.938542 systemd[1]: sshd@9-10.0.0.76:22-10.0.0.1:56910.service: Deactivated successfully. May 12 13:29:00.941466 systemd[1]: session-10.scope: Deactivated successfully. May 12 13:29:00.944205 systemd[1]: Started sshd@10-10.0.0.76:22-10.0.0.1:56926.service - OpenSSH per-connection server daemon (10.0.0.1:56926). 
May 12 13:29:00.945139 systemd-logind[1472]: Removed session 10. May 12 13:29:00.994026 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 56926 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:29:00.995278 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:29:01.000891 systemd-logind[1472]: New session 11 of user core. May 12 13:29:01.012196 systemd[1]: Started session-11.scope - Session 11 of User core. May 12 13:29:01.183679 sshd[4117]: Connection closed by 10.0.0.1 port 56926 May 12 13:29:01.184152 sshd-session[4114]: pam_unix(sshd:session): session closed for user core May 12 13:29:01.198033 systemd[1]: Started sshd@11-10.0.0.76:22-10.0.0.1:56940.service - OpenSSH per-connection server daemon (10.0.0.1:56940). May 12 13:29:01.198628 systemd[1]: sshd@10-10.0.0.76:22-10.0.0.1:56926.service: Deactivated successfully. May 12 13:29:01.201015 systemd[1]: session-11.scope: Deactivated successfully. May 12 13:29:01.206665 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit. May 12 13:29:01.210007 systemd-logind[1472]: Removed session 11. May 12 13:29:01.254596 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 56940 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:29:01.256328 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:29:01.262332 systemd-logind[1472]: New session 12 of user core. May 12 13:29:01.268619 systemd[1]: Started session-12.scope - Session 12 of User core. May 12 13:29:01.421880 sshd[4134]: Connection closed by 10.0.0.1 port 56940 May 12 13:29:01.422212 sshd-session[4126]: pam_unix(sshd:session): session closed for user core May 12 13:29:01.425600 systemd[1]: sshd@11-10.0.0.76:22-10.0.0.1:56940.service: Deactivated successfully. May 12 13:29:01.427378 systemd[1]: session-12.scope: Deactivated successfully. 
May 12 13:29:01.428305 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit. May 12 13:29:01.430182 systemd-logind[1472]: Removed session 12. May 12 13:29:02.229664 containerd[1500]: time="2025-05-12T13:29:02.229382540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7hbtm,Uid:bccbcfb8-e672-441e-9343-2cea5d3395e6,Namespace:kube-system,Attempt:0,}" May 12 13:29:02.443780 systemd-networkd[1407]: calie7a2ebed8ef: Link UP May 12 13:29:02.444689 systemd-networkd[1407]: calie7a2ebed8ef: Gained carrier May 12 13:29:02.460501 containerd[1500]: 2025-05-12 13:29:02.313 [INFO][4148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0 coredns-7db6d8ff4d- kube-system bccbcfb8-e672-441e-9343-2cea5d3395e6 726 0 2025-05-12 13:28:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-7hbtm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie7a2ebed8ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7hbtm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7hbtm-" May 12 13:29:02.460501 containerd[1500]: 2025-05-12 13:29:02.313 [INFO][4148] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7hbtm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" May 12 13:29:02.460501 containerd[1500]: 2025-05-12 13:29:02.397 [INFO][4162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" HandleID="k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Workload="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.409 [INFO][4162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" HandleID="k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Workload="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8d50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-7hbtm", "timestamp":"2025-05-12 13:29:02.397170876 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.409 [INFO][4162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.409 [INFO][4162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.409 [INFO][4162] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.411 [INFO][4162] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" host="localhost" May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.416 [INFO][4162] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.420 [INFO][4162] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.421 [INFO][4162] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.423 [INFO][4162] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:29:02.460750 containerd[1500]: 2025-05-12 13:29:02.423 [INFO][4162] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" host="localhost" May 12 13:29:02.460984 containerd[1500]: 2025-05-12 13:29:02.425 [INFO][4162] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4 May 12 13:29:02.460984 containerd[1500]: 2025-05-12 13:29:02.428 [INFO][4162] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" host="localhost" May 12 13:29:02.460984 containerd[1500]: 2025-05-12 13:29:02.435 [INFO][4162] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" host="localhost" May 12 13:29:02.460984 containerd[1500]: 2025-05-12 13:29:02.435 [INFO][4162] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" host="localhost" May 12 13:29:02.460984 containerd[1500]: 2025-05-12 13:29:02.435 [INFO][4162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 13:29:02.460984 containerd[1500]: 2025-05-12 13:29:02.435 [INFO][4162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" HandleID="k8s-pod-network.49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Workload="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" May 12 13:29:02.461116 containerd[1500]: 2025-05-12 13:29:02.439 [INFO][4148] cni-plugin/k8s.go 386: Populated endpoint ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7hbtm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bccbcfb8-e672-441e-9343-2cea5d3395e6", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-7hbtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7a2ebed8ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:02.461172 containerd[1500]: 2025-05-12 13:29:02.439 [INFO][4148] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7hbtm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" May 12 13:29:02.461172 containerd[1500]: 2025-05-12 13:29:02.439 [INFO][4148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7a2ebed8ef ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7hbtm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" May 12 13:29:02.461172 containerd[1500]: 2025-05-12 13:29:02.445 [INFO][4148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7hbtm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" May 12 
13:29:02.461231 containerd[1500]: 2025-05-12 13:29:02.445 [INFO][4148] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7hbtm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bccbcfb8-e672-441e-9343-2cea5d3395e6", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4", Pod:"coredns-7db6d8ff4d-7hbtm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie7a2ebed8ef", MAC:"72:dc:e1:a6:f5:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:02.461231 containerd[1500]: 2025-05-12 13:29:02.457 [INFO][4148] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7hbtm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--7hbtm-eth0" May 12 13:29:02.544274 containerd[1500]: time="2025-05-12T13:29:02.544160779Z" level=info msg="connecting to shim 49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4" address="unix:///run/containerd/s/f2767821cedfea0978bb7672cfc4f9423e8ddf01434d55e0cb650d4557549c1b" namespace=k8s.io protocol=ttrpc version=3 May 12 13:29:02.569693 systemd[1]: Started cri-containerd-49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4.scope - libcontainer container 49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4. 
May 12 13:29:02.626127 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:29:02.646907 containerd[1500]: time="2025-05-12T13:29:02.646864630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7hbtm,Uid:bccbcfb8-e672-441e-9343-2cea5d3395e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4\"" May 12 13:29:02.653681 containerd[1500]: time="2025-05-12T13:29:02.653648238Z" level=info msg="CreateContainer within sandbox \"49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 13:29:02.664070 containerd[1500]: time="2025-05-12T13:29:02.664025253Z" level=info msg="Container 3b2944ca4dd351ffa86a4dbaa348d8489bfe90b3bfbf580a551adc452366a0af: CDI devices from CRI Config.CDIDevices: []" May 12 13:29:02.666267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381203392.mount: Deactivated successfully. 
May 12 13:29:02.670244 containerd[1500]: time="2025-05-12T13:29:02.670210126Z" level=info msg="CreateContainer within sandbox \"49a90e3dfd4b82caf2c7445e40cf7fcaaa39b4c4db7302ddb2d15d35e82392a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b2944ca4dd351ffa86a4dbaa348d8489bfe90b3bfbf580a551adc452366a0af\"" May 12 13:29:02.670962 containerd[1500]: time="2025-05-12T13:29:02.670737219Z" level=info msg="StartContainer for \"3b2944ca4dd351ffa86a4dbaa348d8489bfe90b3bfbf580a551adc452366a0af\"" May 12 13:29:02.671512 containerd[1500]: time="2025-05-12T13:29:02.671475797Z" level=info msg="connecting to shim 3b2944ca4dd351ffa86a4dbaa348d8489bfe90b3bfbf580a551adc452366a0af" address="unix:///run/containerd/s/f2767821cedfea0978bb7672cfc4f9423e8ddf01434d55e0cb650d4557549c1b" protocol=ttrpc version=3 May 12 13:29:02.690111 systemd[1]: Started cri-containerd-3b2944ca4dd351ffa86a4dbaa348d8489bfe90b3bfbf580a551adc452366a0af.scope - libcontainer container 3b2944ca4dd351ffa86a4dbaa348d8489bfe90b3bfbf580a551adc452366a0af. 
May 12 13:29:02.721338 containerd[1500]: time="2025-05-12T13:29:02.721305545Z" level=info msg="StartContainer for \"3b2944ca4dd351ffa86a4dbaa348d8489bfe90b3bfbf580a551adc452366a0af\" returns successfully" May 12 13:29:03.467617 kubelet[2734]: I0512 13:29:03.466763 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7hbtm" podStartSLOduration=27.4667439 podStartE2EDuration="27.4667439s" podCreationTimestamp="2025-05-12 13:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:29:03.453723946 +0000 UTC m=+44.306097559" watchObservedRunningTime="2025-05-12 13:29:03.4667439 +0000 UTC m=+44.319117513" May 12 13:29:04.066296 systemd-networkd[1407]: calie7a2ebed8ef: Gained IPv6LL May 12 13:29:04.229505 containerd[1500]: time="2025-05-12T13:29:04.229456235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdgcp,Uid:b569786a-d324-4ae2-9a75-6a80b38875a0,Namespace:calico-system,Attempt:0,}" May 12 13:29:04.229897 containerd[1500]: time="2025-05-12T13:29:04.229466075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6575dcdf85-mqr66,Uid:c875fed8-1144-4edd-becd-9872677e567b,Namespace:calico-system,Attempt:0,}" May 12 13:29:04.358203 systemd-networkd[1407]: calie53a5149e76: Link UP May 12 13:29:04.358500 systemd-networkd[1407]: calie53a5149e76: Gained carrier May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.274 [INFO][4280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rdgcp-eth0 csi-node-driver- calico-system b569786a-d324-4ae2-9a75-6a80b38875a0 603 0 2025-05-12 13:28:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rdgcp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie53a5149e76 [] []}} ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Namespace="calico-system" Pod="csi-node-driver-rdgcp" WorkloadEndpoint="localhost-k8s-csi--node--driver--rdgcp-" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.274 [INFO][4280] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Namespace="calico-system" Pod="csi-node-driver-rdgcp" WorkloadEndpoint="localhost-k8s-csi--node--driver--rdgcp-eth0" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.311 [INFO][4305] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" HandleID="k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Workload="localhost-k8s-csi--node--driver--rdgcp-eth0" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.321 [INFO][4305] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" HandleID="k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Workload="localhost-k8s-csi--node--driver--rdgcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d96d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rdgcp", "timestamp":"2025-05-12 13:29:04.310995117 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 
13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.321 [INFO][4305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.321 [INFO][4305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.321 [INFO][4305] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.323 [INFO][4305] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.334 [INFO][4305] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.338 [INFO][4305] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.340 [INFO][4305] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.341 [INFO][4305] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.342 [INFO][4305] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.343 [INFO][4305] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.346 [INFO][4305] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.352 [INFO][4305] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.352 [INFO][4305] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" host="localhost" May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.352 [INFO][4305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 13:29:04.372555 containerd[1500]: 2025-05-12 13:29:04.352 [INFO][4305] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" HandleID="k8s-pod-network.eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Workload="localhost-k8s-csi--node--driver--rdgcp-eth0" May 12 13:29:04.373156 containerd[1500]: 2025-05-12 13:29:04.355 [INFO][4280] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Namespace="calico-system" Pod="csi-node-driver-rdgcp" WorkloadEndpoint="localhost-k8s-csi--node--driver--rdgcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rdgcp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b569786a-d324-4ae2-9a75-6a80b38875a0", ResourceVersion:"603", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rdgcp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie53a5149e76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:04.373156 containerd[1500]: 2025-05-12 13:29:04.355 [INFO][4280] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Namespace="calico-system" Pod="csi-node-driver-rdgcp" WorkloadEndpoint="localhost-k8s-csi--node--driver--rdgcp-eth0" May 12 13:29:04.373156 containerd[1500]: 2025-05-12 13:29:04.355 [INFO][4280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie53a5149e76 ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Namespace="calico-system" Pod="csi-node-driver-rdgcp" WorkloadEndpoint="localhost-k8s-csi--node--driver--rdgcp-eth0" May 12 13:29:04.373156 containerd[1500]: 2025-05-12 13:29:04.358 [INFO][4280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Namespace="calico-system" Pod="csi-node-driver-rdgcp" WorkloadEndpoint="localhost-k8s-csi--node--driver--rdgcp-eth0" May 
12 13:29:04.373156 containerd[1500]: 2025-05-12 13:29:04.359 [INFO][4280] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Namespace="calico-system" Pod="csi-node-driver-rdgcp" WorkloadEndpoint="localhost-k8s-csi--node--driver--rdgcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rdgcp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b569786a-d324-4ae2-9a75-6a80b38875a0", ResourceVersion:"603", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e", Pod:"csi-node-driver-rdgcp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie53a5149e76", MAC:"2e:23:27:8c:32:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:04.373156 containerd[1500]: 2025-05-12 13:29:04.369 [INFO][4280] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" Namespace="calico-system" Pod="csi-node-driver-rdgcp" WorkloadEndpoint="localhost-k8s-csi--node--driver--rdgcp-eth0" May 12 13:29:04.393324 systemd-networkd[1407]: calidb40e5bc30b: Link UP May 12 13:29:04.393626 systemd-networkd[1407]: calidb40e5bc30b: Gained carrier May 12 13:29:04.409336 containerd[1500]: time="2025-05-12T13:29:04.409191871Z" level=info msg="connecting to shim eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e" address="unix:///run/containerd/s/2ba4cd4f511d011b36cb0517af1dd11d826fb8bda00b6299353ba3bd3d8026b9" namespace=k8s.io protocol=ttrpc version=3 May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.286 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0 calico-kube-controllers-6575dcdf85- calico-system c875fed8-1144-4edd-becd-9872677e567b 735 0 2025-05-12 13:28:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6575dcdf85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6575dcdf85-mqr66 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidb40e5bc30b [] []}} ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Namespace="calico-system" Pod="calico-kube-controllers-6575dcdf85-mqr66" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.287 [INFO][4289] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Namespace="calico-system" 
Pod="calico-kube-controllers-6575dcdf85-mqr66" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.324 [INFO][4311] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" HandleID="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Workload="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.336 [INFO][4311] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" HandleID="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Workload="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8b30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6575dcdf85-mqr66", "timestamp":"2025-05-12 13:29:04.324082145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.336 [INFO][4311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.352 [INFO][4311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.352 [INFO][4311] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.355 [INFO][4311] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.361 [INFO][4311] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.368 [INFO][4311] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.373 [INFO][4311] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.376 [INFO][4311] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.376 [INFO][4311] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.380 [INFO][4311] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7 May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.384 [INFO][4311] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.388 [INFO][4311] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.388 [INFO][4311] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" host="localhost" May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.388 [INFO][4311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 13:29:04.410232 containerd[1500]: 2025-05-12 13:29:04.388 [INFO][4311] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" HandleID="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Workload="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" May 12 13:29:04.411335 containerd[1500]: 2025-05-12 13:29:04.390 [INFO][4289] cni-plugin/k8s.go 386: Populated endpoint ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Namespace="calico-system" Pod="calico-kube-controllers-6575dcdf85-mqr66" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0", GenerateName:"calico-kube-controllers-6575dcdf85-", Namespace:"calico-system", SelfLink:"", UID:"c875fed8-1144-4edd-becd-9872677e567b", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6575dcdf85", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6575dcdf85-mqr66", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb40e5bc30b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:04.411335 containerd[1500]: 2025-05-12 13:29:04.391 [INFO][4289] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Namespace="calico-system" Pod="calico-kube-controllers-6575dcdf85-mqr66" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" May 12 13:29:04.411335 containerd[1500]: 2025-05-12 13:29:04.391 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb40e5bc30b ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Namespace="calico-system" Pod="calico-kube-controllers-6575dcdf85-mqr66" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" May 12 13:29:04.411335 containerd[1500]: 2025-05-12 13:29:04.393 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Namespace="calico-system" Pod="calico-kube-controllers-6575dcdf85-mqr66" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" May 12 13:29:04.411335 containerd[1500]: 2025-05-12 13:29:04.394 [INFO][4289] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Namespace="calico-system" Pod="calico-kube-controllers-6575dcdf85-mqr66" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0", GenerateName:"calico-kube-controllers-6575dcdf85-", Namespace:"calico-system", SelfLink:"", UID:"c875fed8-1144-4edd-becd-9872677e567b", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6575dcdf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7", Pod:"calico-kube-controllers-6575dcdf85-mqr66", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb40e5bc30b", MAC:"7e:aa:6e:94:30:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:04.411335 containerd[1500]: 2025-05-12 13:29:04.406 [INFO][4289] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Namespace="calico-system" Pod="calico-kube-controllers-6575dcdf85-mqr66" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0" May 12 13:29:04.436993 containerd[1500]: time="2025-05-12T13:29:04.436417753Z" level=info msg="connecting to shim 33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" address="unix:///run/containerd/s/f1c8bb21ae525d0c72b369b46322b56635d92ef8406955f9263c89d6e2c59a76" namespace=k8s.io protocol=ttrpc version=3 May 12 13:29:04.441272 systemd[1]: Started cri-containerd-eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e.scope - libcontainer container eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e. May 12 13:29:04.461336 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:29:04.466224 systemd[1]: Started cri-containerd-33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7.scope - libcontainer container 33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7. 
May 12 13:29:04.481593 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:29:04.488860 containerd[1500]: time="2025-05-12T13:29:04.488815268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rdgcp,Uid:b569786a-d324-4ae2-9a75-6a80b38875a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e\"" May 12 13:29:04.490157 containerd[1500]: time="2025-05-12T13:29:04.490129499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 12 13:29:04.504775 containerd[1500]: time="2025-05-12T13:29:04.504721643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6575dcdf85-mqr66,Uid:c875fed8-1144-4edd-becd-9872677e567b,Namespace:calico-system,Attempt:0,} returns sandbox id \"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\"" May 12 13:29:05.230010 containerd[1500]: time="2025-05-12T13:29:05.229704818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xc5z,Uid:22ec1171-e30f-460d-b307-c9c15268f694,Namespace:kube-system,Attempt:0,}" May 12 13:29:05.231537 containerd[1500]: time="2025-05-12T13:29:05.230842444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565df67cbb-w69v8,Uid:0e405ba7-374c-45e2-9ff8-52063ef948ea,Namespace:calico-apiserver,Attempt:0,}" May 12 13:29:05.233672 containerd[1500]: time="2025-05-12T13:29:05.233519546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5599775c9-pcf4c,Uid:2d751f48-caa2-4e41-89a9-f6a5fe02729e,Namespace:calico-apiserver,Attempt:0,}" May 12 13:29:05.363976 systemd-networkd[1407]: cali38e3c20a381: Link UP May 12 13:29:05.364545 systemd-networkd[1407]: cali38e3c20a381: Gained carrier May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.284 [INFO][4456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0 calico-apiserver-565df67cbb- calico-apiserver 0e405ba7-374c-45e2-9ff8-52063ef948ea 736 0 2025-05-12 13:28:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:565df67cbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-565df67cbb-w69v8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38e3c20a381 [] []}} ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Namespace="calico-apiserver" Pod="calico-apiserver-565df67cbb-w69v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--565df67cbb--w69v8-" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.284 [INFO][4456] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Namespace="calico-apiserver" Pod="calico-apiserver-565df67cbb-w69v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.316 [INFO][4489] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" HandleID="k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Workload="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.328 [INFO][4489] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" HandleID="k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Workload="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-565df67cbb-w69v8", "timestamp":"2025-05-12 13:29:05.316137092 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.328 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.328 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.328 [INFO][4489] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.330 [INFO][4489] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.334 [INFO][4489] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.338 [INFO][4489] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.340 [INFO][4489] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.342 [INFO][4489] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.342 [INFO][4489] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.343 [INFO][4489] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.346 [INFO][4489] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.358 [INFO][4489] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.358 [INFO][4489] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" host="localhost" May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.358 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 12 13:29:05.385536 containerd[1500]: 2025-05-12 13:29:05.358 [INFO][4489] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" HandleID="k8s-pod-network.4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Workload="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" May 12 13:29:05.386194 containerd[1500]: 2025-05-12 13:29:05.361 [INFO][4456] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Namespace="calico-apiserver" Pod="calico-apiserver-565df67cbb-w69v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0", GenerateName:"calico-apiserver-565df67cbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e405ba7-374c-45e2-9ff8-52063ef948ea", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565df67cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-565df67cbb-w69v8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38e3c20a381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:05.386194 containerd[1500]: 2025-05-12 13:29:05.362 [INFO][4456] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Namespace="calico-apiserver" Pod="calico-apiserver-565df67cbb-w69v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" May 12 13:29:05.386194 containerd[1500]: 2025-05-12 13:29:05.362 [INFO][4456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38e3c20a381 ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Namespace="calico-apiserver" Pod="calico-apiserver-565df67cbb-w69v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" May 12 13:29:05.386194 containerd[1500]: 2025-05-12 13:29:05.364 [INFO][4456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Namespace="calico-apiserver" Pod="calico-apiserver-565df67cbb-w69v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" May 12 13:29:05.386194 containerd[1500]: 2025-05-12 13:29:05.365 [INFO][4456] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Namespace="calico-apiserver" Pod="calico-apiserver-565df67cbb-w69v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0", GenerateName:"calico-apiserver-565df67cbb-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"0e405ba7-374c-45e2-9ff8-52063ef948ea", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"565df67cbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c", Pod:"calico-apiserver-565df67cbb-w69v8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38e3c20a381", MAC:"26:6a:0d:67:80:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:05.386194 containerd[1500]: 2025-05-12 13:29:05.382 [INFO][4456] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" Namespace="calico-apiserver" Pod="calico-apiserver-565df67cbb-w69v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--565df67cbb--w69v8-eth0" May 12 13:29:05.417419 systemd-networkd[1407]: calic972e87dbed: Link UP May 12 13:29:05.418160 systemd-networkd[1407]: calic972e87dbed: Gained carrier May 12 13:29:05.424213 containerd[1500]: time="2025-05-12T13:29:05.423679374Z" level=info msg="connecting to shim 4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c" 
address="unix:///run/containerd/s/1a22d570410268076d0fbd8d113313dee68dcd5e3a1f44259403691f12b5724a" namespace=k8s.io protocol=ttrpc version=3 May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.275 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0 coredns-7db6d8ff4d- kube-system 22ec1171-e30f-460d-b307-c9c15268f694 731 0 2025-05-12 13:28:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-5xc5z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic972e87dbed [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xc5z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xc5z-" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.275 [INFO][4441] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xc5z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.318 [INFO][4487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" HandleID="k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Workload="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.332 [INFO][4487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" 
HandleID="k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Workload="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030caa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-5xc5z", "timestamp":"2025-05-12 13:29:05.318572669 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.332 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.358 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.358 [INFO][4487] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.360 [INFO][4487] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.367 [INFO][4487] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.381 [INFO][4487] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.384 [INFO][4487] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.389 [INFO][4487] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.389 
[INFO][4487] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.391 [INFO][4487] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.395 [INFO][4487] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.402 [INFO][4487] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.402 [INFO][4487] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" host="localhost" May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.402 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 12 13:29:05.447619 containerd[1500]: 2025-05-12 13:29:05.402 [INFO][4487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" HandleID="k8s-pod-network.d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Workload="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" May 12 13:29:05.448573 containerd[1500]: 2025-05-12 13:29:05.413 [INFO][4441] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xc5z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"22ec1171-e30f-460d-b307-c9c15268f694", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-5xc5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic972e87dbed", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:05.448573 containerd[1500]: 2025-05-12 13:29:05.413 [INFO][4441] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xc5z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" May 12 13:29:05.448573 containerd[1500]: 2025-05-12 13:29:05.413 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic972e87dbed ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xc5z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" May 12 13:29:05.448573 containerd[1500]: 2025-05-12 13:29:05.418 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xc5z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" May 12 13:29:05.448573 containerd[1500]: 2025-05-12 13:29:05.420 [INFO][4441] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5xc5z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"22ec1171-e30f-460d-b307-c9c15268f694", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c", Pod:"coredns-7db6d8ff4d-5xc5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic972e87dbed", MAC:"e2:b2:ff:f5:10:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:05.448573 containerd[1500]: 2025-05-12 13:29:05.438 [INFO][4441] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-5xc5z" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5xc5z-eth0" May 12 13:29:05.460250 systemd-networkd[1407]: cali83085989889: Link UP May 12 13:29:05.461357 systemd[1]: Started cri-containerd-4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c.scope - libcontainer container 4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c. May 12 13:29:05.462980 systemd-networkd[1407]: cali83085989889: Gained carrier May 12 13:29:05.478052 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.290 [INFO][4468] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0 calico-apiserver-5599775c9- calico-apiserver 2d751f48-caa2-4e41-89a9-f6a5fe02729e 734 0 2025-05-12 13:28:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5599775c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5599775c9-pcf4c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali83085989889 [] []}} ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-pcf4c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--pcf4c-" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.290 [INFO][4468] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-pcf4c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" May 12 13:29:05.482121 
containerd[1500]: 2025-05-12 13:29:05.337 [INFO][4499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" HandleID="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Workload="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.348 [INFO][4499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" HandleID="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Workload="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000428ef0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5599775c9-pcf4c", "timestamp":"2025-05-12 13:29:05.337065175 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.348 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.402 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.402 [INFO][4499] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.405 [INFO][4499] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.412 [INFO][4499] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.418 [INFO][4499] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.420 [INFO][4499] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.425 [INFO][4499] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.425 [INFO][4499] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.428 [INFO][4499] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.435 [INFO][4499] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.452 [INFO][4499] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.452 [INFO][4499] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" host="localhost" May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.452 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 12 13:29:05.482121 containerd[1500]: 2025-05-12 13:29:05.452 [INFO][4499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" HandleID="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Workload="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" May 12 13:29:05.482608 containerd[1500]: 2025-05-12 13:29:05.457 [INFO][4468] cni-plugin/k8s.go 386: Populated endpoint ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-pcf4c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0", GenerateName:"calico-apiserver-5599775c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d751f48-caa2-4e41-89a9-f6a5fe02729e", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5599775c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5599775c9-pcf4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83085989889", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:05.482608 containerd[1500]: 2025-05-12 13:29:05.457 [INFO][4468] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-pcf4c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" May 12 13:29:05.482608 containerd[1500]: 2025-05-12 13:29:05.457 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83085989889 ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-pcf4c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" May 12 13:29:05.482608 containerd[1500]: 2025-05-12 13:29:05.463 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-pcf4c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" May 12 13:29:05.482608 containerd[1500]: 2025-05-12 13:29:05.465 [INFO][4468] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-pcf4c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0", GenerateName:"calico-apiserver-5599775c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d751f48-caa2-4e41-89a9-f6a5fe02729e", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5599775c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d", Pod:"calico-apiserver-5599775c9-pcf4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83085989889", MAC:"f6:8f:c8:29:31:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 12 13:29:05.482608 containerd[1500]: 2025-05-12 13:29:05.476 [INFO][4468] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Namespace="calico-apiserver" 
Pod="calico-apiserver-5599775c9-pcf4c" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0" May 12 13:29:05.511371 containerd[1500]: time="2025-05-12T13:29:05.511098032Z" level=info msg="connecting to shim 884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" address="unix:///run/containerd/s/11236a4052718efc7fde3e8d2c7bbdbff4ee95f9358799b61e308dc1cf75669f" namespace=k8s.io protocol=ttrpc version=3 May 12 13:29:05.512507 containerd[1500]: time="2025-05-12T13:29:05.512217538Z" level=info msg="connecting to shim d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c" address="unix:///run/containerd/s/03383b013e6ef15d359b0a123cbf627e1ad121ec23e75466ddb9bd4f36951228" namespace=k8s.io protocol=ttrpc version=3 May 12 13:29:05.512507 containerd[1500]: time="2025-05-12T13:29:05.512385942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-565df67cbb-w69v8,Uid:0e405ba7-374c-45e2-9ff8-52063ef948ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c\"" May 12 13:29:05.537266 systemd[1]: Started cri-containerd-884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d.scope - libcontainer container 884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d. May 12 13:29:05.543124 systemd[1]: Started cri-containerd-d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c.scope - libcontainer container d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c. 
May 12 13:29:05.550530 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:29:05.559505 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 12 13:29:05.572926 containerd[1500]: time="2025-05-12T13:29:05.572622252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5599775c9-pcf4c,Uid:2d751f48-caa2-4e41-89a9-f6a5fe02729e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\"" May 12 13:29:05.584936 containerd[1500]: time="2025-05-12T13:29:05.584893215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5xc5z,Uid:22ec1171-e30f-460d-b307-c9c15268f694,Namespace:kube-system,Attempt:0,} returns sandbox id \"d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c\"" May 12 13:29:05.587700 containerd[1500]: time="2025-05-12T13:29:05.587664759Z" level=info msg="CreateContainer within sandbox \"d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 13:29:05.593228 containerd[1500]: time="2025-05-12T13:29:05.593184006Z" level=info msg="Container b1420ae196649627cfeabe88aa8ac08df38b487d2a34524363711ae032fe477f: CDI devices from CRI Config.CDIDevices: []" May 12 13:29:05.598237 containerd[1500]: time="2025-05-12T13:29:05.598192842Z" level=info msg="CreateContainer within sandbox \"d62248d0662d8b8224ef3ef37f8c3ade50761f7eb70f9e8bb45c3d3150436f3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1420ae196649627cfeabe88aa8ac08df38b487d2a34524363711ae032fe477f\"" May 12 13:29:05.600193 containerd[1500]: time="2025-05-12T13:29:05.599968403Z" level=info msg="StartContainer for \"b1420ae196649627cfeabe88aa8ac08df38b487d2a34524363711ae032fe477f\"" May 12 13:29:05.601013 containerd[1500]: 
time="2025-05-12T13:29:05.600890544Z" level=info msg="connecting to shim b1420ae196649627cfeabe88aa8ac08df38b487d2a34524363711ae032fe477f" address="unix:///run/containerd/s/03383b013e6ef15d359b0a123cbf627e1ad121ec23e75466ddb9bd4f36951228" protocol=ttrpc version=3 May 12 13:29:05.626126 systemd[1]: Started cri-containerd-b1420ae196649627cfeabe88aa8ac08df38b487d2a34524363711ae032fe477f.scope - libcontainer container b1420ae196649627cfeabe88aa8ac08df38b487d2a34524363711ae032fe477f. May 12 13:29:05.665047 containerd[1500]: time="2025-05-12T13:29:05.665012824Z" level=info msg="StartContainer for \"b1420ae196649627cfeabe88aa8ac08df38b487d2a34524363711ae032fe477f\" returns successfully" May 12 13:29:05.786890 containerd[1500]: time="2025-05-12T13:29:05.786124779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:29:05.786890 containerd[1500]: time="2025-05-12T13:29:05.786744793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 12 13:29:05.787548 containerd[1500]: time="2025-05-12T13:29:05.787519091Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:29:05.789744 containerd[1500]: time="2025-05-12T13:29:05.789702382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.299538841s" May 12 13:29:05.789744 containerd[1500]: time="2025-05-12T13:29:05.789736862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference 
\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 12 13:29:05.790809 containerd[1500]: time="2025-05-12T13:29:05.790775006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:29:05.792323 containerd[1500]: time="2025-05-12T13:29:05.792288561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 12 13:29:05.794050 containerd[1500]: time="2025-05-12T13:29:05.793540790Z" level=info msg="CreateContainer within sandbox \"eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 12 13:29:05.824302 containerd[1500]: time="2025-05-12T13:29:05.824239499Z" level=info msg="Container e5ca8181c418ec43cf64d2a1531783d9e994130c24a13512b68256e6ccc61367: CDI devices from CRI Config.CDIDevices: []" May 12 13:29:05.834257 containerd[1500]: time="2025-05-12T13:29:05.834196568Z" level=info msg="CreateContainer within sandbox \"eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e5ca8181c418ec43cf64d2a1531783d9e994130c24a13512b68256e6ccc61367\"" May 12 13:29:05.834900 containerd[1500]: time="2025-05-12T13:29:05.834763102Z" level=info msg="StartContainer for \"e5ca8181c418ec43cf64d2a1531783d9e994130c24a13512b68256e6ccc61367\"" May 12 13:29:05.836150 containerd[1500]: time="2025-05-12T13:29:05.836124373Z" level=info msg="connecting to shim e5ca8181c418ec43cf64d2a1531783d9e994130c24a13512b68256e6ccc61367" address="unix:///run/containerd/s/2ba4cd4f511d011b36cb0517af1dd11d826fb8bda00b6299353ba3bd3d8026b9" protocol=ttrpc version=3 May 12 13:29:05.857159 systemd[1]: Started cri-containerd-e5ca8181c418ec43cf64d2a1531783d9e994130c24a13512b68256e6ccc61367.scope - libcontainer container 
e5ca8181c418ec43cf64d2a1531783d9e994130c24a13512b68256e6ccc61367. May 12 13:29:05.895257 containerd[1500]: time="2025-05-12T13:29:05.895217777Z" level=info msg="StartContainer for \"e5ca8181c418ec43cf64d2a1531783d9e994130c24a13512b68256e6ccc61367\" returns successfully" May 12 13:29:06.114208 systemd-networkd[1407]: calidb40e5bc30b: Gained IPv6LL May 12 13:29:06.229376 containerd[1500]: time="2025-05-12T13:29:06.229150018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5599775c9-c8whr,Uid:223e65e8-dd23-4599-ac1d-826d7fd16ccd,Namespace:calico-apiserver,Attempt:0,}" May 12 13:29:06.306534 systemd-networkd[1407]: calie53a5149e76: Gained IPv6LL May 12 13:29:06.337959 systemd-networkd[1407]: calic595ad36a05: Link UP May 12 13:29:06.339017 systemd-networkd[1407]: calic595ad36a05: Gained carrier May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.266 [INFO][4757] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0 calico-apiserver-5599775c9- calico-apiserver 223e65e8-dd23-4599-ac1d-826d7fd16ccd 733 0 2025-05-12 13:28:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5599775c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5599775c9-c8whr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic595ad36a05 [] []}} ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-c8whr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--c8whr-" May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.266 [INFO][4757] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-c8whr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0" May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.298 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" HandleID="k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Workload="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0" May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.308 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" HandleID="k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Workload="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5599775c9-c8whr", "timestamp":"2025-05-12 13:29:06.297988136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.308 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.308 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.308 [INFO][4770] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.310 [INFO][4770] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.313 [INFO][4770] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.317 [INFO][4770] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.319 [INFO][4770] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.321 [INFO][4770] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.321 [INFO][4770] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.322 [INFO][4770] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.326 [INFO][4770] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.332 [INFO][4770] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.332 [INFO][4770] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" host="localhost"
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.332 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 12 13:29:06.349935 containerd[1500]: 2025-05-12 13:29:06.332 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" HandleID="k8s-pod-network.19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Workload="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0"
May 12 13:29:06.350595 containerd[1500]: 2025-05-12 13:29:06.335 [INFO][4757] cni-plugin/k8s.go 386: Populated endpoint ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-c8whr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0", GenerateName:"calico-apiserver-5599775c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"223e65e8-dd23-4599-ac1d-826d7fd16ccd", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5599775c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5599775c9-c8whr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic595ad36a05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 12 13:29:06.350595 containerd[1500]: 2025-05-12 13:29:06.335 [INFO][4757] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-c8whr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0"
May 12 13:29:06.350595 containerd[1500]: 2025-05-12 13:29:06.335 [INFO][4757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic595ad36a05 ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-c8whr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0"
May 12 13:29:06.350595 containerd[1500]: 2025-05-12 13:29:06.337 [INFO][4757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-c8whr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0"
May 12 13:29:06.350595 containerd[1500]: 2025-05-12 13:29:06.338 [INFO][4757] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-c8whr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0", GenerateName:"calico-apiserver-5599775c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"223e65e8-dd23-4599-ac1d-826d7fd16ccd", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5599775c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821", Pod:"calico-apiserver-5599775c9-c8whr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic595ad36a05", MAC:"da:27:db:2c:71:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 12 13:29:06.350595 containerd[1500]: 2025-05-12 13:29:06.347 [INFO][4757] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" Namespace="calico-apiserver" Pod="calico-apiserver-5599775c9-c8whr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5599775c9--c8whr-eth0"
May 12 13:29:06.369183 containerd[1500]: time="2025-05-12T13:29:06.368576612Z" level=info msg="connecting to shim 19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821" address="unix:///run/containerd/s/e1bbebcf54b0260f453b4d84be4f206dd14b3182c67c14d7d5d763cdc6dfe274" namespace=k8s.io protocol=ttrpc version=3
May 12 13:29:06.394121 systemd[1]: Started cri-containerd-19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821.scope - libcontainer container 19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821.
May 12 13:29:06.410097 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 12 13:29:06.434681 systemd[1]: Started sshd@12-10.0.0.76:22-10.0.0.1:34238.service - OpenSSH per-connection server daemon (10.0.0.1:34238).
May 12 13:29:06.449543 containerd[1500]: time="2025-05-12T13:29:06.449498523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5599775c9-c8whr,Uid:223e65e8-dd23-4599-ac1d-826d7fd16ccd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821\""
May 12 13:29:06.470667 kubelet[2734]: I0512 13:29:06.469550 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5xc5z" podStartSLOduration=30.469534856 podStartE2EDuration="30.469534856s" podCreationTimestamp="2025-05-12 13:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:29:06.469093526 +0000 UTC m=+47.321467139" watchObservedRunningTime="2025-05-12 13:29:06.469534856 +0000 UTC m=+47.321908469"
May 12 13:29:06.498112 systemd-networkd[1407]: cali38e3c20a381: Gained IPv6LL
May 12 13:29:06.499795 sshd[4841]: Accepted publickey for core from 10.0.0.1 port 34238 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:06.501384 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:06.507806 systemd-logind[1472]: New session 13 of user core.
May 12 13:29:06.512174 systemd[1]: Started session-13.scope - Session 13 of User core.
May 12 13:29:06.724594 sshd[4848]: Connection closed by 10.0.0.1 port 34238
May 12 13:29:06.724924 sshd-session[4841]: pam_unix(sshd:session): session closed for user core
May 12 13:29:06.738687 systemd[1]: sshd@12-10.0.0.76:22-10.0.0.1:34238.service: Deactivated successfully.
May 12 13:29:06.740592 systemd[1]: session-13.scope: Deactivated successfully.
May 12 13:29:06.741452 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit.
May 12 13:29:06.744225 systemd[1]: Started sshd@13-10.0.0.76:22-10.0.0.1:34244.service - OpenSSH per-connection server daemon (10.0.0.1:34244).
May 12 13:29:06.747009 systemd-logind[1472]: Removed session 13.
May 12 13:29:06.792921 sshd[4861]: Accepted publickey for core from 10.0.0.1 port 34244 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:06.794040 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:06.798147 systemd-logind[1472]: New session 14 of user core.
May 12 13:29:06.806094 systemd[1]: Started session-14.scope - Session 14 of User core.
May 12 13:29:07.071625 sshd[4866]: Connection closed by 10.0.0.1 port 34244
May 12 13:29:07.069358 sshd-session[4861]: pam_unix(sshd:session): session closed for user core
May 12 13:29:07.089184 systemd[1]: sshd@13-10.0.0.76:22-10.0.0.1:34244.service: Deactivated successfully.
May 12 13:29:07.092231 systemd[1]: session-14.scope: Deactivated successfully.
May 12 13:29:07.094653 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.
May 12 13:29:07.095380 systemd[1]: Started sshd@14-10.0.0.76:22-10.0.0.1:34246.service - OpenSSH per-connection server daemon (10.0.0.1:34246).
May 12 13:29:07.097859 systemd-logind[1472]: Removed session 14.
May 12 13:29:07.183561 sshd[4876]: Accepted publickey for core from 10.0.0.1 port 34246 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:07.184530 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:07.190005 systemd-logind[1472]: New session 15 of user core.
May 12 13:29:07.199104 systemd[1]: Started session-15.scope - Session 15 of User core.
May 12 13:29:07.394131 systemd-networkd[1407]: cali83085989889: Gained IPv6LL
May 12 13:29:07.459106 systemd-networkd[1407]: calic972e87dbed: Gained IPv6LL
May 12 13:29:07.477723 containerd[1500]: time="2025-05-12T13:29:07.477686534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:07.478533 containerd[1500]: time="2025-05-12T13:29:07.478119503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116"
May 12 13:29:07.478984 containerd[1500]: time="2025-05-12T13:29:07.478943602Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:07.480885 containerd[1500]: time="2025-05-12T13:29:07.480853124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:07.482029 containerd[1500]: time="2025-05-12T13:29:07.481540099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.689215697s"
May 12 13:29:07.482029 containerd[1500]: time="2025-05-12T13:29:07.481569100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\""
May 12 13:29:07.482663 containerd[1500]: time="2025-05-12T13:29:07.482269875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 12 13:29:07.488980 containerd[1500]: time="2025-05-12T13:29:07.488918543Z" level=info msg="CreateContainer within sandbox \"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 12 13:29:07.498971 containerd[1500]: time="2025-05-12T13:29:07.496056741Z" level=info msg="Container 82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd: CDI devices from CRI Config.CDIDevices: []"
May 12 13:29:07.502611 containerd[1500]: time="2025-05-12T13:29:07.502566806Z" level=info msg="CreateContainer within sandbox \"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\""
May 12 13:29:07.503054 containerd[1500]: time="2025-05-12T13:29:07.503028456Z" level=info msg="StartContainer for \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\""
May 12 13:29:07.504260 containerd[1500]: time="2025-05-12T13:29:07.504235283Z" level=info msg="connecting to shim 82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd" address="unix:///run/containerd/s/f1c8bb21ae525d0c72b369b46322b56635d92ef8406955f9263c89d6e2c59a76" protocol=ttrpc version=3
May 12 13:29:07.527199 systemd[1]: Started cri-containerd-82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd.scope - libcontainer container 82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd.
May 12 13:29:07.569197 containerd[1500]: time="2025-05-12T13:29:07.569161923Z" level=info msg="StartContainer for \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" returns successfully"
May 12 13:29:08.355073 systemd-networkd[1407]: calic595ad36a05: Gained IPv6LL
May 12 13:29:08.490778 kubelet[2734]: I0512 13:29:08.490622 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6575dcdf85-mqr66" podStartSLOduration=23.514683216999998 podStartE2EDuration="26.490604811s" podCreationTimestamp="2025-05-12 13:28:42 +0000 UTC" firstStartedPulling="2025-05-12 13:29:04.506236359 +0000 UTC m=+45.358609932" lastFinishedPulling="2025-05-12 13:29:07.482157953 +0000 UTC m=+48.334531526" observedRunningTime="2025-05-12 13:29:08.490079279 +0000 UTC m=+49.342452932" watchObservedRunningTime="2025-05-12 13:29:08.490604811 +0000 UTC m=+49.342978424"
May 12 13:29:08.513458 containerd[1500]: time="2025-05-12T13:29:08.513412588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" id:\"c2d93e978fad4a23cfd5dad3c6979ef61bda7de78a51bf7de873852d23abafd5\" pid:4941 exited_at:{seconds:1747056548 nanos:513098141}"
May 12 13:29:08.772988 sshd[4883]: Connection closed by 10.0.0.1 port 34246
May 12 13:29:08.773808 sshd-session[4876]: pam_unix(sshd:session): session closed for user core
May 12 13:29:08.787690 systemd[1]: sshd@14-10.0.0.76:22-10.0.0.1:34246.service: Deactivated successfully.
May 12 13:29:08.791038 systemd[1]: session-15.scope: Deactivated successfully.
May 12 13:29:08.791216 systemd[1]: session-15.scope: Consumed 517ms CPU time, 66.5M memory peak.
May 12 13:29:08.794650 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.
May 12 13:29:08.796452 systemd[1]: Started sshd@15-10.0.0.76:22-10.0.0.1:34260.service - OpenSSH per-connection server daemon (10.0.0.1:34260).
May 12 13:29:08.800132 systemd-logind[1472]: Removed session 15.
May 12 13:29:08.865604 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 34260 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:08.867359 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:08.875165 systemd-logind[1472]: New session 16 of user core.
May 12 13:29:08.890143 systemd[1]: Started session-16.scope - Session 16 of User core.
May 12 13:29:09.286615 sshd[4968]: Connection closed by 10.0.0.1 port 34260
May 12 13:29:09.287054 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
May 12 13:29:09.304854 systemd[1]: sshd@15-10.0.0.76:22-10.0.0.1:34260.service: Deactivated successfully.
May 12 13:29:09.308417 systemd[1]: session-16.scope: Deactivated successfully.
May 12 13:29:09.309958 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit.
May 12 13:29:09.311799 systemd[1]: Started sshd@16-10.0.0.76:22-10.0.0.1:34268.service - OpenSSH per-connection server daemon (10.0.0.1:34268).
May 12 13:29:09.312808 systemd-logind[1472]: Removed session 16.
May 12 13:29:09.392623 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 34268 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:09.394235 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:09.401169 systemd-logind[1472]: New session 17 of user core.
May 12 13:29:09.409087 systemd[1]: Started session-17.scope - Session 17 of User core.
May 12 13:29:09.460445 containerd[1500]: time="2025-05-12T13:29:09.460404963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:09.461413 containerd[1500]: time="2025-05-12T13:29:09.461389504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603"
May 12 13:29:09.462376 containerd[1500]: time="2025-05-12T13:29:09.462326325Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:09.464487 containerd[1500]: time="2025-05-12T13:29:09.464294847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:09.465015 containerd[1500]: time="2025-05-12T13:29:09.464980221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.982680985s"
May 12 13:29:09.465064 containerd[1500]: time="2025-05-12T13:29:09.465019422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 12 13:29:09.466309 containerd[1500]: time="2025-05-12T13:29:09.466274609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 12 13:29:09.468022 containerd[1500]: time="2025-05-12T13:29:09.467989046Z" level=info msg="CreateContainer within sandbox \"4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 12 13:29:09.477410 containerd[1500]: time="2025-05-12T13:29:09.476429946Z" level=info msg="Container 3e9f8d25ce8fff52363dcda05dad3bf32f9cd0ddc6d2c612347f67a84fecbeb8: CDI devices from CRI Config.CDIDevices: []"
May 12 13:29:09.483910 containerd[1500]: time="2025-05-12T13:29:09.483866186Z" level=info msg="CreateContainer within sandbox \"4a24f84fbef38ee1d1eacaf9ad87480f294281125900502e59077fd19e21c19c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e9f8d25ce8fff52363dcda05dad3bf32f9cd0ddc6d2c612347f67a84fecbeb8\""
May 12 13:29:09.484999 containerd[1500]: time="2025-05-12T13:29:09.484912288Z" level=info msg="StartContainer for \"3e9f8d25ce8fff52363dcda05dad3bf32f9cd0ddc6d2c612347f67a84fecbeb8\""
May 12 13:29:09.485987 containerd[1500]: time="2025-05-12T13:29:09.485922990Z" level=info msg="connecting to shim 3e9f8d25ce8fff52363dcda05dad3bf32f9cd0ddc6d2c612347f67a84fecbeb8" address="unix:///run/containerd/s/1a22d570410268076d0fbd8d113313dee68dcd5e3a1f44259403691f12b5724a" protocol=ttrpc version=3
May 12 13:29:09.509148 systemd[1]: Started cri-containerd-3e9f8d25ce8fff52363dcda05dad3bf32f9cd0ddc6d2c612347f67a84fecbeb8.scope - libcontainer container 3e9f8d25ce8fff52363dcda05dad3bf32f9cd0ddc6d2c612347f67a84fecbeb8.
May 12 13:29:09.619255 containerd[1500]: time="2025-05-12T13:29:09.619127961Z" level=info msg="StartContainer for \"3e9f8d25ce8fff52363dcda05dad3bf32f9cd0ddc6d2c612347f67a84fecbeb8\" returns successfully"
May 12 13:29:09.687676 sshd[4987]: Connection closed by 10.0.0.1 port 34268
May 12 13:29:09.688120 sshd-session[4984]: pam_unix(sshd:session): session closed for user core
May 12 13:29:09.692293 systemd[1]: sshd@16-10.0.0.76:22-10.0.0.1:34268.service: Deactivated successfully.
May 12 13:29:09.695896 systemd[1]: session-17.scope: Deactivated successfully.
May 12 13:29:09.697145 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit.
May 12 13:29:09.698369 systemd-logind[1472]: Removed session 17.
May 12 13:29:09.768551 containerd[1500]: time="2025-05-12T13:29:09.768504038Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:09.769555 containerd[1500]: time="2025-05-12T13:29:09.769514860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 12 13:29:09.771396 containerd[1500]: time="2025-05-12T13:29:09.771360380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 305.054649ms"
May 12 13:29:09.771427 containerd[1500]: time="2025-05-12T13:29:09.771404101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 12 13:29:09.773756 containerd[1500]: time="2025-05-12T13:29:09.773449504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 12 13:29:09.774314 containerd[1500]: time="2025-05-12T13:29:09.774262322Z" level=info msg="CreateContainer within sandbox \"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 12 13:29:09.782986 containerd[1500]: time="2025-05-12T13:29:09.782265253Z" level=info msg="Container b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3: CDI devices from CRI Config.CDIDevices: []"
May 12 13:29:09.789121 containerd[1500]: time="2025-05-12T13:29:09.789075759Z" level=info msg="CreateContainer within sandbox \"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\""
May 12 13:29:09.789637 containerd[1500]: time="2025-05-12T13:29:09.789603650Z" level=info msg="StartContainer for \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\""
May 12 13:29:09.790584 containerd[1500]: time="2025-05-12T13:29:09.790540790Z" level=info msg="connecting to shim b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3" address="unix:///run/containerd/s/11236a4052718efc7fde3e8d2c7bbdbff4ee95f9358799b61e308dc1cf75669f" protocol=ttrpc version=3
May 12 13:29:09.812128 systemd[1]: Started cri-containerd-b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3.scope - libcontainer container b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3.
May 12 13:29:09.858005 containerd[1500]: time="2025-05-12T13:29:09.857935433Z" level=info msg="StartContainer for \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" returns successfully"
May 12 13:29:10.493129 kubelet[2734]: I0512 13:29:10.492481 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5599775c9-pcf4c" podStartSLOduration=24.294309128 podStartE2EDuration="28.492460321s" podCreationTimestamp="2025-05-12 13:28:42 +0000 UTC" firstStartedPulling="2025-05-12 13:29:05.573923482 +0000 UTC m=+46.426297055" lastFinishedPulling="2025-05-12 13:29:09.772074635 +0000 UTC m=+50.624448248" observedRunningTime="2025-05-12 13:29:10.491610863 +0000 UTC m=+51.343984476" watchObservedRunningTime="2025-05-12 13:29:10.492460321 +0000 UTC m=+51.344833934"
May 12 13:29:10.509144 kubelet[2734]: I0512 13:29:10.508841 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-565df67cbb-w69v8" podStartSLOduration=24.556882682 podStartE2EDuration="28.508822905s" podCreationTimestamp="2025-05-12 13:28:42 +0000 UTC" firstStartedPulling="2025-05-12 13:29:05.514230144 +0000 UTC m=+46.366603757" lastFinishedPulling="2025-05-12 13:29:09.466170367 +0000 UTC m=+50.318543980" observedRunningTime="2025-05-12 13:29:10.507793644 +0000 UTC m=+51.360167257" watchObservedRunningTime="2025-05-12 13:29:10.508822905 +0000 UTC m=+51.361196518"
May 12 13:29:10.939148 containerd[1500]: time="2025-05-12T13:29:10.939036042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:10.940387 containerd[1500]: time="2025-05-12T13:29:10.940324749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299"
May 12 13:29:10.942351 containerd[1500]: time="2025-05-12T13:29:10.941024923Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:10.943705 containerd[1500]: time="2025-05-12T13:29:10.943658059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:10.944293 containerd[1500]: time="2025-05-12T13:29:10.944264272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.170690524s"
May 12 13:29:10.944467 containerd[1500]: time="2025-05-12T13:29:10.944373394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\""
May 12 13:29:10.945491 containerd[1500]: time="2025-05-12T13:29:10.945384695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\""
May 12 13:29:10.948705 containerd[1500]: time="2025-05-12T13:29:10.948673764Z" level=info msg="CreateContainer within sandbox \"eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 12 13:29:10.958561 containerd[1500]: time="2025-05-12T13:29:10.958523012Z" level=info msg="Container 340eb4df9539d835e6c80d982abb44e9983b5b8f35d632961c10a55cdc66a3f5: CDI devices from CRI Config.CDIDevices: []"
May 12 13:29:10.973183 containerd[1500]: time="2025-05-12T13:29:10.973130279Z" level=info msg="CreateContainer within sandbox \"eecde385363eba38527ccd001bc90f36458087c2c5d960f73a3cbd9ac430d33e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"340eb4df9539d835e6c80d982abb44e9983b5b8f35d632961c10a55cdc66a3f5\""
May 12 13:29:10.974108 containerd[1500]: time="2025-05-12T13:29:10.974043498Z" level=info msg="StartContainer for \"340eb4df9539d835e6c80d982abb44e9983b5b8f35d632961c10a55cdc66a3f5\""
May 12 13:29:10.976399 containerd[1500]: time="2025-05-12T13:29:10.976371987Z" level=info msg="connecting to shim 340eb4df9539d835e6c80d982abb44e9983b5b8f35d632961c10a55cdc66a3f5" address="unix:///run/containerd/s/2ba4cd4f511d011b36cb0517af1dd11d826fb8bda00b6299353ba3bd3d8026b9" protocol=ttrpc version=3
May 12 13:29:11.007265 systemd[1]: Started cri-containerd-340eb4df9539d835e6c80d982abb44e9983b5b8f35d632961c10a55cdc66a3f5.scope - libcontainer container 340eb4df9539d835e6c80d982abb44e9983b5b8f35d632961c10a55cdc66a3f5.
May 12 13:29:11.068311 containerd[1500]: time="2025-05-12T13:29:11.068262339Z" level=info msg="StartContainer for \"340eb4df9539d835e6c80d982abb44e9983b5b8f35d632961c10a55cdc66a3f5\" returns successfully"
May 12 13:29:11.323150 kubelet[2734]: I0512 13:29:11.323103 2734 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 12 13:29:11.333661 containerd[1500]: time="2025-05-12T13:29:11.333622357Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:29:11.333756 kubelet[2734]: I0512 13:29:11.333713 2734 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 12 13:29:11.335644 containerd[1500]: time="2025-05-12T13:29:11.335024066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77"
May 12 13:29:11.336809 containerd[1500]: time="2025-05-12T13:29:11.336778903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 391.334526ms"
May 12 13:29:11.336867 containerd[1500]: time="2025-05-12T13:29:11.336822183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\""
May 12 13:29:11.338746 containerd[1500]: time="2025-05-12T13:29:11.338716063Z" level=info msg="CreateContainer within sandbox \"19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
May 12 13:29:11.345714 containerd[1500]: time="2025-05-12T13:29:11.345439522Z" level=info msg="Container 82d742b4ee731199b4d98bee8b12259115ce96131b15d1c6577242033f7e2f79: CDI devices from CRI Config.CDIDevices: []"
May 12 13:29:11.353141 containerd[1500]: time="2025-05-12T13:29:11.353052600Z" level=info msg="CreateContainer within sandbox \"19b15be64a38b37c84489627a4a79a55db7ff9846d31361c9cf24f91c3f1c821\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"82d742b4ee731199b4d98bee8b12259115ce96131b15d1c6577242033f7e2f79\""
May 12 13:29:11.353784 containerd[1500]: time="2025-05-12T13:29:11.353744654Z" level=info msg="StartContainer for \"82d742b4ee731199b4d98bee8b12259115ce96131b15d1c6577242033f7e2f79\""
May 12 13:29:11.354774 containerd[1500]: time="2025-05-12T13:29:11.354745595Z" level=info msg="connecting to shim 82d742b4ee731199b4d98bee8b12259115ce96131b15d1c6577242033f7e2f79" address="unix:///run/containerd/s/e1bbebcf54b0260f453b4d84be4f206dd14b3182c67c14d7d5d763cdc6dfe274" protocol=ttrpc version=3
May 12 13:29:11.382129 systemd[1]: Started cri-containerd-82d742b4ee731199b4d98bee8b12259115ce96131b15d1c6577242033f7e2f79.scope - libcontainer container 82d742b4ee731199b4d98bee8b12259115ce96131b15d1c6577242033f7e2f79.
May 12 13:29:11.422311 containerd[1500]: time="2025-05-12T13:29:11.422273634Z" level=info msg="StartContainer for \"82d742b4ee731199b4d98bee8b12259115ce96131b15d1c6577242033f7e2f79\" returns successfully"
May 12 13:29:11.504565 kubelet[2734]: I0512 13:29:11.503082 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 13:29:11.504565 kubelet[2734]: I0512 13:29:11.503790 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 13:29:11.512359 kubelet[2734]: I0512 13:29:11.512111 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rdgcp" podStartSLOduration=23.056840178 podStartE2EDuration="29.512098255s" podCreationTimestamp="2025-05-12 13:28:42 +0000 UTC" firstStartedPulling="2025-05-12 13:29:04.489904494 +0000 UTC m=+45.342278107" lastFinishedPulling="2025-05-12 13:29:10.945162571 +0000 UTC m=+51.797536184" observedRunningTime="2025-05-12 13:29:11.511364 +0000 UTC m=+52.363737613" watchObservedRunningTime="2025-05-12 13:29:11.512098255 +0000 UTC m=+52.364471868"
May 12 13:29:11.528287 kubelet[2734]: I0512 13:29:11.528231 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5599775c9-c8whr" podStartSLOduration=24.642048315 podStartE2EDuration="29.528212589s" podCreationTimestamp="2025-05-12 13:28:42 +0000 UTC" firstStartedPulling="2025-05-12 13:29:06.4511644 +0000 UTC m=+47.303538013" lastFinishedPulling="2025-05-12 13:29:11.337328674 +0000 UTC m=+52.189702287" observedRunningTime="2025-05-12 13:29:11.527488654 +0000 UTC m=+52.379862227" watchObservedRunningTime="2025-05-12 13:29:11.528212589 +0000 UTC m=+52.380586202"
May 12 13:29:14.704578 systemd[1]: Started sshd@17-10.0.0.76:22-10.0.0.1:34402.service - OpenSSH per-connection server daemon (10.0.0.1:34402).
May 12 13:29:14.783436 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 34402 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:14.784904 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:14.789016 systemd-logind[1472]: New session 18 of user core.
May 12 13:29:14.797102 systemd[1]: Started session-18.scope - Session 18 of User core.
May 12 13:29:15.001481 sshd[5162]: Connection closed by 10.0.0.1 port 34402
May 12 13:29:15.002218 sshd-session[5160]: pam_unix(sshd:session): session closed for user core
May 12 13:29:15.005615 systemd[1]: sshd@17-10.0.0.76:22-10.0.0.1:34402.service: Deactivated successfully.
May 12 13:29:15.007738 systemd[1]: session-18.scope: Deactivated successfully.
May 12 13:29:15.010600 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
May 12 13:29:15.011534 systemd-logind[1472]: Removed session 18.
May 12 13:29:20.014500 systemd[1]: Started sshd@18-10.0.0.76:22-10.0.0.1:34410.service - OpenSSH per-connection server daemon (10.0.0.1:34410).
May 12 13:29:20.063640 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 34410 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:20.064942 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:20.068970 systemd-logind[1472]: New session 19 of user core.
May 12 13:29:20.080111 systemd[1]: Started session-19.scope - Session 19 of User core.
May 12 13:29:20.218831 sshd[5192]: Connection closed by 10.0.0.1 port 34410
May 12 13:29:20.219315 sshd-session[5190]: pam_unix(sshd:session): session closed for user core
May 12 13:29:20.223605 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit.
May 12 13:29:20.223844 systemd[1]: sshd@18-10.0.0.76:22-10.0.0.1:34410.service: Deactivated successfully.
May 12 13:29:20.226208 systemd[1]: session-19.scope: Deactivated successfully.
May 12 13:29:20.227148 systemd-logind[1472]: Removed session 19.
May 12 13:29:20.852675 containerd[1500]: time="2025-05-12T13:29:20.852622120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" id:\"3ab0914e89265536202e77f8118f66815bae183fcbc5cf72e675d70b7a203cfb\" pid:5217 exited_at:{seconds:1747056560 nanos:851253575}"
May 12 13:29:22.395511 kubelet[2734]: I0512 13:29:22.395389 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 13:29:22.438422 kubelet[2734]: I0512 13:29:22.438371 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 12 13:29:22.443701 containerd[1500]: time="2025-05-12T13:29:22.443211812Z" level=info msg="StopContainer for \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" with timeout 30 (s)"
May 12 13:29:22.445095 containerd[1500]: time="2025-05-12T13:29:22.444595797Z" level=info msg="Stop container \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" with signal terminated"
May 12 13:29:22.481450 systemd[1]: cri-containerd-b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3.scope: Deactivated successfully.
May 12 13:29:22.482087 systemd[1]: cri-containerd-b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3.scope: Consumed 1.365s CPU time, 39.6M memory peak.
May 12 13:29:22.492306 containerd[1500]: time="2025-05-12T13:29:22.492264464Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" id:\"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" pid:5048 exit_status:1 exited_at:{seconds:1747056562 nanos:491885297}"
May 12 13:29:22.494882 containerd[1500]: time="2025-05-12T13:29:22.494843671Z" level=info msg="received exit event container_id:\"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" id:\"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" pid:5048 exit_status:1 exited_at:{seconds:1747056562 nanos:491885297}"
May 12 13:29:22.514867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3-rootfs.mount: Deactivated successfully.
May 12 13:29:22.522282 containerd[1500]: time="2025-05-12T13:29:22.522245329Z" level=info msg="StopContainer for \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" returns successfully"
May 12 13:29:22.525560 containerd[1500]: time="2025-05-12T13:29:22.525140141Z" level=info msg="StopPodSandbox for \"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\""
May 12 13:29:22.534245 containerd[1500]: time="2025-05-12T13:29:22.534198746Z" level=info msg="Container to stop \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 12 13:29:22.542392 systemd[1]: cri-containerd-884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d.scope: Deactivated successfully.
May 12 13:29:22.543759 containerd[1500]: time="2025-05-12T13:29:22.543724599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\" id:\"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\" pid:4660 exit_status:137 exited_at:{seconds:1747056562 nanos:543313752}"
May 12 13:29:22.565832 containerd[1500]: time="2025-05-12T13:29:22.565774240Z" level=info msg="shim disconnected" id=884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d namespace=k8s.io
May 12 13:29:22.565832 containerd[1500]: time="2025-05-12T13:29:22.565805841Z" level=warning msg="cleaning up after shim disconnected" id=884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d namespace=k8s.io
May 12 13:29:22.565832 containerd[1500]: time="2025-05-12T13:29:22.565835961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 13:29:22.567185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d-rootfs.mount: Deactivated successfully.
May 12 13:29:22.614659 containerd[1500]: time="2025-05-12T13:29:22.614601488Z" level=info msg="received exit event sandbox_id:\"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\" exit_status:137 exited_at:{seconds:1747056562 nanos:543313752}"
May 12 13:29:22.616971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d-shm.mount: Deactivated successfully.
May 12 13:29:22.662970 systemd-networkd[1407]: cali83085989889: Link DOWN
May 12 13:29:22.663025 systemd-networkd[1407]: cali83085989889: Lost carrier
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.659 [INFO][5305] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.659 [INFO][5305] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" iface="eth0" netns="/var/run/netns/cni-723181cf-9fdd-e8cf-385c-c7a21cbdcb0d"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.660 [INFO][5305] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" iface="eth0" netns="/var/run/netns/cni-723181cf-9fdd-e8cf-385c-c7a21cbdcb0d"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.668 [INFO][5305] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" after=8.033146ms iface="eth0" netns="/var/run/netns/cni-723181cf-9fdd-e8cf-385c-c7a21cbdcb0d"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.670 [INFO][5305] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.670 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.692 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" HandleID="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Workload="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.692 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.692 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.736 [INFO][5319] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" HandleID="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Workload="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.737 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" HandleID="k8s-pod-network.884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d" Workload="localhost-k8s-calico--apiserver--5599775c9--pcf4c-eth0"
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.738 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 12 13:29:22.742741 containerd[1500]: 2025-05-12 13:29:22.741 [INFO][5305] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d"
May 12 13:29:22.743425 containerd[1500]: time="2025-05-12T13:29:22.743024863Z" level=info msg="TearDown network for sandbox \"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\" successfully"
May 12 13:29:22.743425 containerd[1500]: time="2025-05-12T13:29:22.743050783Z" level=info msg="StopPodSandbox for \"884389e5f2ee352f34c68abafabf37a8340e7298e197461fb8b0579d2213954d\" returns successfully"
May 12 13:29:22.744916 systemd[1]: run-netns-cni\x2d723181cf\x2d9fdd\x2de8cf\x2d385c\x2dc7a21cbdcb0d.mount: Deactivated successfully.
May 12 13:29:22.840844 kubelet[2734]: I0512 13:29:22.840784 2734 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2d751f48-caa2-4e41-89a9-f6a5fe02729e-calico-apiserver-certs\") pod \"2d751f48-caa2-4e41-89a9-f6a5fe02729e\" (UID: \"2d751f48-caa2-4e41-89a9-f6a5fe02729e\") "
May 12 13:29:22.841294 kubelet[2734]: I0512 13:29:22.841271 2734 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldxqn\" (UniqueName: \"kubernetes.io/projected/2d751f48-caa2-4e41-89a9-f6a5fe02729e-kube-api-access-ldxqn\") pod \"2d751f48-caa2-4e41-89a9-f6a5fe02729e\" (UID: \"2d751f48-caa2-4e41-89a9-f6a5fe02729e\") "
May 12 13:29:22.845711 systemd[1]: var-lib-kubelet-pods-2d751f48\x2dcaa2\x2d4e41\x2d89a9\x2df6a5fe02729e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dldxqn.mount: Deactivated successfully.
May 12 13:29:22.845830 systemd[1]: var-lib-kubelet-pods-2d751f48\x2dcaa2\x2d4e41\x2d89a9\x2df6a5fe02729e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
May 12 13:29:22.846721 kubelet[2734]: I0512 13:29:22.846644 2734 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d751f48-caa2-4e41-89a9-f6a5fe02729e-kube-api-access-ldxqn" (OuterVolumeSpecName: "kube-api-access-ldxqn") pod "2d751f48-caa2-4e41-89a9-f6a5fe02729e" (UID: "2d751f48-caa2-4e41-89a9-f6a5fe02729e"). InnerVolumeSpecName "kube-api-access-ldxqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 12 13:29:22.847656 kubelet[2734]: I0512 13:29:22.847530 2734 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d751f48-caa2-4e41-89a9-f6a5fe02729e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2d751f48-caa2-4e41-89a9-f6a5fe02729e" (UID: "2d751f48-caa2-4e41-89a9-f6a5fe02729e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 12 13:29:22.942481 kubelet[2734]: I0512 13:29:22.942259 2734 reconciler_common.go:289] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2d751f48-caa2-4e41-89a9-f6a5fe02729e-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
May 12 13:29:22.942481 kubelet[2734]: I0512 13:29:22.942299 2734 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ldxqn\" (UniqueName: \"kubernetes.io/projected/2d751f48-caa2-4e41-89a9-f6a5fe02729e-kube-api-access-ldxqn\") on node \"localhost\" DevicePath \"\""
May 12 13:29:23.240151 systemd[1]: Removed slice kubepods-besteffort-pod2d751f48_caa2_4e41_89a9_f6a5fe02729e.slice - libcontainer container kubepods-besteffort-pod2d751f48_caa2_4e41_89a9_f6a5fe02729e.slice.
May 12 13:29:23.240258 systemd[1]: kubepods-besteffort-pod2d751f48_caa2_4e41_89a9_f6a5fe02729e.slice: Consumed 1.382s CPU time, 39.8M memory peak.
May 12 13:29:23.529982 kubelet[2734]: I0512 13:29:23.529863 2734 scope.go:117] "RemoveContainer" containerID="b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3"
May 12 13:29:23.533028 containerd[1500]: time="2025-05-12T13:29:23.532991463Z" level=info msg="RemoveContainer for \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\""
May 12 13:29:23.551691 containerd[1500]: time="2025-05-12T13:29:23.551649479Z" level=info msg="RemoveContainer for \"b01b2cfeb6fd6657a93f8b536cabd6005875d0b56df4a5ffe516462a454c19e3\" returns successfully"
May 12 13:29:25.231854 kubelet[2734]: I0512 13:29:25.231810 2734 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d751f48-caa2-4e41-89a9-f6a5fe02729e" path="/var/lib/kubelet/pods/2d751f48-caa2-4e41-89a9-f6a5fe02729e/volumes"
May 12 13:29:25.238510 systemd[1]: Started sshd@19-10.0.0.76:22-10.0.0.1:55562.service - OpenSSH per-connection server daemon (10.0.0.1:55562).
May 12 13:29:25.293366 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 55562 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:25.294697 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:25.298813 systemd-logind[1472]: New session 20 of user core.
May 12 13:29:25.310150 systemd[1]: Started session-20.scope - Session 20 of User core.
May 12 13:29:25.461355 sshd[5337]: Connection closed by 10.0.0.1 port 55562
May 12 13:29:25.461863 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
May 12 13:29:25.466669 systemd[1]: sshd@19-10.0.0.76:22-10.0.0.1:55562.service: Deactivated successfully.
May 12 13:29:25.469527 systemd[1]: session-20.scope: Deactivated successfully.
May 12 13:29:25.470976 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit.
May 12 13:29:25.472102 systemd-logind[1472]: Removed session 20.
May 12 13:29:28.691760 containerd[1500]: time="2025-05-12T13:29:28.691665440Z" level=info msg="StopContainer for \"848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd\" with timeout 300 (s)"
May 12 13:29:28.693136 containerd[1500]: time="2025-05-12T13:29:28.693098052Z" level=info msg="Stop container \"848afd13fe48b7c39b8768c9ea0e9dde20592bcfcb4211594c87f5d37d3f12dd\" with signal terminated"
May 12 13:29:28.789154 containerd[1500]: time="2025-05-12T13:29:28.788929521Z" level=info msg="StopContainer for \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" with timeout 30 (s)"
May 12 13:29:28.789399 containerd[1500]: time="2025-05-12T13:29:28.789369645Z" level=info msg="Stop container \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" with signal terminated"
May 12 13:29:28.853320 systemd[1]: cri-containerd-82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd.scope: Deactivated successfully.
May 12 13:29:28.861555 containerd[1500]: time="2025-05-12T13:29:28.861118916Z" level=info msg="received exit event container_id:\"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" id:\"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" pid:4904 exit_status:2 exited_at:{seconds:1747056568 nanos:860871794}"
May 12 13:29:28.861555 containerd[1500]: time="2025-05-12T13:29:28.861278637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" id:\"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" pid:4904 exit_status:2 exited_at:{seconds:1747056568 nanos:860871794}"
May 12 13:29:28.888783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd-rootfs.mount: Deactivated successfully.
May 12 13:29:28.916603 containerd[1500]: time="2025-05-12T13:29:28.916550972Z" level=info msg="StopContainer for \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" returns successfully"
May 12 13:29:28.918764 containerd[1500]: time="2025-05-12T13:29:28.918707030Z" level=info msg="StopPodSandbox for \"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\""
May 12 13:29:28.918852 containerd[1500]: time="2025-05-12T13:29:28.918802391Z" level=info msg="Container to stop \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 12 13:29:28.926572 systemd[1]: cri-containerd-33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7.scope: Deactivated successfully.
May 12 13:29:28.929667 containerd[1500]: time="2025-05-12T13:29:28.929288637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" id:\"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" pid:4421 exit_status:137 exited_at:{seconds:1747056568 nanos:928596032}"
May 12 13:29:28.962657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7-rootfs.mount: Deactivated successfully.
May 12 13:29:28.963438 containerd[1500]: time="2025-05-12T13:29:28.963321198Z" level=info msg="shim disconnected" id=33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7 namespace=k8s.io
May 12 13:29:28.963438 containerd[1500]: time="2025-05-12T13:29:28.963357758Z" level=warning msg="cleaning up after shim disconnected" id=33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7 namespace=k8s.io
May 12 13:29:28.963438 containerd[1500]: time="2025-05-12T13:29:28.963395598Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 13:29:28.993280 containerd[1500]: time="2025-05-12T13:29:28.989593094Z" level=info msg="received exit event sandbox_id:\"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" exit_status:137 exited_at:{seconds:1747056568 nanos:928596032}"
May 12 13:29:28.993653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7-shm.mount: Deactivated successfully.
May 12 13:29:28.996556 containerd[1500]: time="2025-05-12T13:29:28.996414030Z" level=error msg="Failed to handle event container_id:\"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" id:\"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" pid:4421 exit_status:137 exited_at:{seconds:1747056568 nanos:928596032} for 33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed"
May 12 13:29:29.050526 systemd-networkd[1407]: calidb40e5bc30b: Link DOWN
May 12 13:29:29.050534 systemd-networkd[1407]: calidb40e5bc30b: Lost carrier
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.049 [INFO][5436] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.049 [INFO][5436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" iface="eth0" netns="/var/run/netns/cni-bef9d26d-5409-93e7-3d73-c57104b5b684"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.049 [INFO][5436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" iface="eth0" netns="/var/run/netns/cni-bef9d26d-5409-93e7-3d73-c57104b5b684"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.061 [INFO][5436] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" after=11.521417ms iface="eth0" netns="/var/run/netns/cni-bef9d26d-5409-93e7-3d73-c57104b5b684"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.061 [INFO][5436] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.061 [INFO][5436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.099 [INFO][5449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" HandleID="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Workload="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.099 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.099 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.136 [INFO][5449] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" HandleID="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Workload="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.137 [INFO][5449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" HandleID="k8s-pod-network.33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7" Workload="localhost-k8s-calico--kube--controllers--6575dcdf85--mqr66-eth0"
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.138 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 12 13:29:29.144987 containerd[1500]: 2025-05-12 13:29:29.141 [INFO][5436] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7"
May 12 13:29:29.147329 systemd[1]: run-netns-cni\x2dbef9d26d\x2d5409\x2d93e7\x2d3d73\x2dc57104b5b684.mount: Deactivated successfully.
May 12 13:29:29.147751 containerd[1500]: time="2025-05-12T13:29:29.147606546Z" level=info msg="TearDown network for sandbox \"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" successfully"
May 12 13:29:29.147751 containerd[1500]: time="2025-05-12T13:29:29.147639506Z" level=info msg="StopPodSandbox for \"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" returns successfully"
May 12 13:29:29.280842 kubelet[2734]: I0512 13:29:29.280311 2734 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c875fed8-1144-4edd-becd-9872677e567b-tigera-ca-bundle\") pod \"c875fed8-1144-4edd-becd-9872677e567b\" (UID: \"c875fed8-1144-4edd-becd-9872677e567b\") "
May 12 13:29:29.280842 kubelet[2734]: I0512 13:29:29.280369 2734 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzkrl\" (UniqueName: \"kubernetes.io/projected/c875fed8-1144-4edd-becd-9872677e567b-kube-api-access-mzkrl\") pod \"c875fed8-1144-4edd-becd-9872677e567b\" (UID: \"c875fed8-1144-4edd-becd-9872677e567b\") "
May 12 13:29:29.284227 kubelet[2734]: I0512 13:29:29.284171 2734 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c875fed8-1144-4edd-becd-9872677e567b-kube-api-access-mzkrl" (OuterVolumeSpecName: "kube-api-access-mzkrl") pod "c875fed8-1144-4edd-becd-9872677e567b" (UID: "c875fed8-1144-4edd-becd-9872677e567b"). InnerVolumeSpecName "kube-api-access-mzkrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 12 13:29:29.286031 systemd[1]: var-lib-kubelet-pods-c875fed8\x2d1144\x2d4edd\x2dbecd\x2d9872677e567b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzkrl.mount: Deactivated successfully.
May 12 13:29:29.289517 kubelet[2734]: I0512 13:29:29.289465 2734 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c875fed8-1144-4edd-becd-9872677e567b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c875fed8-1144-4edd-becd-9872677e567b" (UID: "c875fed8-1144-4edd-becd-9872677e567b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 12 13:29:29.381159 kubelet[2734]: I0512 13:29:29.381119 2734 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mzkrl\" (UniqueName: \"kubernetes.io/projected/c875fed8-1144-4edd-becd-9872677e567b-kube-api-access-mzkrl\") on node \"localhost\" DevicePath \"\""
May 12 13:29:29.381159 kubelet[2734]: I0512 13:29:29.381154 2734 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c875fed8-1144-4edd-becd-9872677e567b-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
May 12 13:29:29.546684 kubelet[2734]: I0512 13:29:29.546211 2734 scope.go:117] "RemoveContainer" containerID="82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd"
May 12 13:29:29.549669 containerd[1500]: time="2025-05-12T13:29:29.549146336Z" level=info msg="RemoveContainer for \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\""
May 12 13:29:29.554857 systemd[1]: Removed slice kubepods-besteffort-podc875fed8_1144_4edd_becd_9872677e567b.slice - libcontainer container kubepods-besteffort-podc875fed8_1144_4edd_becd_9872677e567b.slice.
May 12 13:29:29.566767 containerd[1500]: time="2025-05-12T13:29:29.566528083Z" level=info msg="RemoveContainer for \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" returns successfully"
May 12 13:29:29.566941 kubelet[2734]: I0512 13:29:29.566824 2734 scope.go:117] "RemoveContainer" containerID="82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd"
May 12 13:29:29.567202 containerd[1500]: time="2025-05-12T13:29:29.567164048Z" level=error msg="ContainerStatus for \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\": not found"
May 12 13:29:29.567394 kubelet[2734]: E0512 13:29:29.567370 2734 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\": not found" containerID="82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd"
May 12 13:29:29.567496 kubelet[2734]: I0512 13:29:29.567404 2734 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd"} err="failed to get container status \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"82e73632afb1eda6d8382e767d7b22521b53cd07af472126bf9aa4803a7912fd\": not found"
May 12 13:29:29.593446 kubelet[2734]: I0512 13:29:29.593320 2734 topology_manager.go:215] "Topology Admit Handler" podUID="beccaa68-5eef-497f-a6ac-03e9aa4dcc45" podNamespace="calico-system" podName="calico-kube-controllers-58cc7dd69f-zlcwc"
May 12 13:29:29.593676 kubelet[2734]: E0512 13:29:29.593650 2734 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c875fed8-1144-4edd-becd-9872677e567b" containerName="calico-kube-controllers"
May 12 13:29:29.593676 kubelet[2734]: E0512 13:29:29.593671 2734 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d751f48-caa2-4e41-89a9-f6a5fe02729e" containerName="calico-apiserver"
May 12 13:29:29.593805 kubelet[2734]: I0512 13:29:29.593701 2734 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d751f48-caa2-4e41-89a9-f6a5fe02729e" containerName="calico-apiserver"
May 12 13:29:29.593805 kubelet[2734]: I0512 13:29:29.593711 2734 memory_manager.go:354] "RemoveStaleState removing state" podUID="c875fed8-1144-4edd-becd-9872677e567b" containerName="calico-kube-controllers"
May 12 13:29:29.606192 systemd[1]: Created slice kubepods-besteffort-podbeccaa68_5eef_497f_a6ac_03e9aa4dcc45.slice - libcontainer container kubepods-besteffort-podbeccaa68_5eef_497f_a6ac_03e9aa4dcc45.slice.
May 12 13:29:29.683137 kubelet[2734]: I0512 13:29:29.682914 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beccaa68-5eef-497f-a6ac-03e9aa4dcc45-tigera-ca-bundle\") pod \"calico-kube-controllers-58cc7dd69f-zlcwc\" (UID: \"beccaa68-5eef-497f-a6ac-03e9aa4dcc45\") " pod="calico-system/calico-kube-controllers-58cc7dd69f-zlcwc"
May 12 13:29:29.683137 kubelet[2734]: I0512 13:29:29.683046 2734 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8n4\" (UniqueName: \"kubernetes.io/projected/beccaa68-5eef-497f-a6ac-03e9aa4dcc45-kube-api-access-jp8n4\") pod \"calico-kube-controllers-58cc7dd69f-zlcwc\" (UID: \"beccaa68-5eef-497f-a6ac-03e9aa4dcc45\") " pod="calico-system/calico-kube-controllers-58cc7dd69f-zlcwc"
May 12 13:29:29.888080 systemd[1]: var-lib-kubelet-pods-c875fed8\x2d1144\x2d4edd\x2dbecd\x2d9872677e567b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully.
May 12 13:29:29.911573 containerd[1500]: time="2025-05-12T13:29:29.911321314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cc7dd69f-zlcwc,Uid:beccaa68-5eef-497f-a6ac-03e9aa4dcc45,Namespace:calico-system,Attempt:0,}"
May 12 13:29:30.065595 systemd-networkd[1407]: cali6d13c0e2fd4: Link UP
May 12 13:29:30.065764 systemd-networkd[1407]: cali6d13c0e2fd4: Gained carrier
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:29.954 [INFO][5469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0 calico-kube-controllers-58cc7dd69f- calico-system beccaa68-5eef-497f-a6ac-03e9aa4dcc45 1220 0 2025-05-12 13:29:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58cc7dd69f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58cc7dd69f-zlcwc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6d13c0e2fd4 [] []}} ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Namespace="calico-system" Pod="calico-kube-controllers-58cc7dd69f-zlcwc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:29.954 [INFO][5469] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Namespace="calico-system" Pod="calico-kube-controllers-58cc7dd69f-zlcwc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:29.989 [INFO][5484] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" HandleID="k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Workload="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.001 [INFO][5484] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" HandleID="k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Workload="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027a370), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58cc7dd69f-zlcwc", "timestamp":"2025-05-12 13:29:29.989385053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.001 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.001 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.001 [INFO][5484] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.005 [INFO][5484] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.017 [INFO][5484] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.026 [INFO][5484] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.029 [INFO][5484] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.035 [INFO][5484] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.035 [INFO][5484] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.041 [INFO][5484] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.046 [INFO][5484] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.059 [INFO][5484] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.059 [INFO][5484] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" host="localhost"
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.059 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 12 13:29:30.082519 containerd[1500]: 2025-05-12 13:29:30.059 [INFO][5484] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" HandleID="k8s-pod-network.ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Workload="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0"
May 12 13:29:30.083387 containerd[1500]: 2025-05-12 13:29:30.063 [INFO][5469] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Namespace="calico-system" Pod="calico-kube-controllers-58cc7dd69f-zlcwc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0", GenerateName:"calico-kube-controllers-58cc7dd69f-", Namespace:"calico-system", SelfLink:"", UID:"beccaa68-5eef-497f-a6ac-03e9aa4dcc45", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 29, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cc7dd69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58cc7dd69f-zlcwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d13c0e2fd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 12 13:29:30.083387 containerd[1500]: 2025-05-12 13:29:30.063 [INFO][5469] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Namespace="calico-system" Pod="calico-kube-controllers-58cc7dd69f-zlcwc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0"
May 12 13:29:30.083387 containerd[1500]: 2025-05-12 13:29:30.063 [INFO][5469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d13c0e2fd4 ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Namespace="calico-system" Pod="calico-kube-controllers-58cc7dd69f-zlcwc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0"
May 12 13:29:30.083387 containerd[1500]: 2025-05-12 13:29:30.065 [INFO][5469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Namespace="calico-system" Pod="calico-kube-controllers-58cc7dd69f-zlcwc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0"
May 12 13:29:30.083387 containerd[1500]: 2025-05-12 13:29:30.065 [INFO][5469] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Namespace="calico-system" Pod="calico-kube-controllers-58cc7dd69f-zlcwc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0", GenerateName:"calico-kube-controllers-58cc7dd69f-", Namespace:"calico-system", SelfLink:"", UID:"beccaa68-5eef-497f-a6ac-03e9aa4dcc45", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2025, time.May, 12, 13, 29, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cc7dd69f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c", Pod:"calico-kube-controllers-58cc7dd69f-zlcwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d13c0e2fd4", MAC:"ee:ca:db:5f:87:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
May 12 13:29:30.083387 containerd[1500]: 2025-05-12 13:29:30.079 [INFO][5469] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" Namespace="calico-system" Pod="calico-kube-controllers-58cc7dd69f-zlcwc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cc7dd69f--zlcwc-eth0"
May 12 13:29:30.105145 containerd[1500]: time="2025-05-12T13:29:30.104238643Z" level=info msg="connecting to shim ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c" address="unix:///run/containerd/s/9f7dd6cae02554f2ead743633711418e8264a50c7f12bee90284490bc77d8799" namespace=k8s.io protocol=ttrpc version=3
May 12 13:29:30.131171 systemd[1]: Started cri-containerd-ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c.scope - libcontainer container ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c.
May 12 13:29:30.153149 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 12 13:29:30.177053 containerd[1500]: time="2025-05-12T13:29:30.176936551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cc7dd69f-zlcwc,Uid:beccaa68-5eef-497f-a6ac-03e9aa4dcc45,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c\""
May 12 13:29:30.186665 containerd[1500]: time="2025-05-12T13:29:30.186602075Z" level=info msg="CreateContainer within sandbox \"ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 12 13:29:30.195611 containerd[1500]: time="2025-05-12T13:29:30.195554032Z" level=info msg="Container c0e4a5b67a2746da20362e4160ddd2d1bb3d4abadb8ad2c0a57736709da49744: CDI devices from CRI Config.CDIDevices: []"
May 12 13:29:30.202667 containerd[1500]: time="2025-05-12T13:29:30.202608933Z" level=info msg="CreateContainer within sandbox \"ec3c99f7d8163cc90a17fa9c50cf1ef87b3a11392e5fc6867bfc235d057d9f1c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c0e4a5b67a2746da20362e4160ddd2d1bb3d4abadb8ad2c0a57736709da49744\""
May 12 13:29:30.203196 containerd[1500]: time="2025-05-12T13:29:30.203164138Z" level=info msg="StartContainer for \"c0e4a5b67a2746da20362e4160ddd2d1bb3d4abadb8ad2c0a57736709da49744\""
May 12 13:29:30.204807 containerd[1500]: time="2025-05-12T13:29:30.204728191Z" level=info msg="connecting to shim c0e4a5b67a2746da20362e4160ddd2d1bb3d4abadb8ad2c0a57736709da49744" address="unix:///run/containerd/s/9f7dd6cae02554f2ead743633711418e8264a50c7f12bee90284490bc77d8799" protocol=ttrpc version=3
May 12 13:29:30.229200 systemd[1]: Started cri-containerd-c0e4a5b67a2746da20362e4160ddd2d1bb3d4abadb8ad2c0a57736709da49744.scope - libcontainer container c0e4a5b67a2746da20362e4160ddd2d1bb3d4abadb8ad2c0a57736709da49744.
May 12 13:29:30.268734 containerd[1500]: time="2025-05-12T13:29:30.268685904Z" level=info msg="StartContainer for \"c0e4a5b67a2746da20362e4160ddd2d1bb3d4abadb8ad2c0a57736709da49744\" returns successfully"
May 12 13:29:30.341005 containerd[1500]: time="2025-05-12T13:29:30.340957769Z" level=info msg="TaskExit event in podsandbox handler container_id:\"342ed0f9ae0fca3be8e8ab15baf1e548ed1e2599742bb5f02403fa977a4a3241\" id:\"c19256dd977fa25e0fe8af6838100d1760b82f5bd0c6268683f870cb3b307563\" pid:5598 exit_status:1 exited_at:{seconds:1747056570 nanos:340419724}"
May 12 13:29:30.396466 containerd[1500]: time="2025-05-12T13:29:30.396395488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" id:\"33b7b13c1f42b4db530ef006755530f39cd5e74720f5b96c5d656070585a20f7\" pid:4421 exit_status:137 exited_at:{seconds:1747056568 nanos:928596032}"
May 12 13:29:30.479905 systemd[1]: Started sshd@20-10.0.0.76:22-10.0.0.1:55568.service - OpenSSH per-connection server daemon (10.0.0.1:55568).
May 12 13:29:30.531317 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 55568 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8
May 12 13:29:30.535468 sshd-session[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 12 13:29:30.540997 systemd-logind[1472]: New session 21 of user core.
May 12 13:29:30.547154 systemd[1]: Started session-21.scope - Session 21 of User core.
May 12 13:29:30.690362 sshd[5615]: Connection closed by 10.0.0.1 port 55568
May 12 13:29:30.691992 sshd-session[5613]: pam_unix(sshd:session): session closed for user core
May 12 13:29:30.698057 systemd[1]: sshd@20-10.0.0.76:22-10.0.0.1:55568.service: Deactivated successfully.
May 12 13:29:30.698228 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit.
May 12 13:29:30.701657 systemd[1]: session-21.scope: Deactivated successfully.
May 12 13:29:30.704085 systemd-logind[1472]: Removed session 21.
May 12 13:29:31.231786 kubelet[2734]: I0512 13:29:31.231604 2734 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c875fed8-1144-4edd-becd-9872677e567b" path="/var/lib/kubelet/pods/c875fed8-1144-4edd-becd-9872677e567b/volumes"
May 12 13:29:31.600523 containerd[1500]: time="2025-05-12T13:29:31.600283568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0e4a5b67a2746da20362e4160ddd2d1bb3d4abadb8ad2c0a57736709da49744\" id:\"07d54e18cd7afe6e6ceb89e497e48cbc6d6c9fd647c865be27f6a9220d04fd99\" pid:5649 exited_at:{seconds:1747056571 nanos:594291875}"
May 12 13:29:31.614123 kubelet[2734]: I0512 13:29:31.612876 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58cc7dd69f-zlcwc" podStartSLOduration=2.612857159 podStartE2EDuration="2.612857159s" podCreationTimestamp="2025-05-12 13:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:29:30.561596955 +0000 UTC m=+71.413970568" watchObservedRunningTime="2025-05-12 13:29:31.612857159 +0000 UTC m=+72.465230772"
May 12 13:29:31.842118 systemd-networkd[1407]: cali6d13c0e2fd4: Gained IPv6LL