May 14 23:53:25.928875 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 23:53:25.928898 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025
May 14 23:53:25.928908 kernel: KASLR enabled
May 14 23:53:25.928914 kernel: efi: EFI v2.7 by EDK II
May 14 23:53:25.928919 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 14 23:53:25.928925 kernel: random: crng init done
May 14 23:53:25.928932 kernel: secureboot: Secure boot disabled
May 14 23:53:25.928938 kernel: ACPI: Early table checksum verification disabled
May 14 23:53:25.928944 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 14 23:53:25.928951 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 23:53:25.928957 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.928963 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.928969 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.928975 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.928982 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.928990 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.928997 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.929003 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.929009 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:53:25.929015 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 23:53:25.929022 kernel: NUMA: Failed to initialise from firmware
May 14 23:53:25.929028 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:53:25.929034 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 14 23:53:25.929041 kernel: Zone ranges:
May 14 23:53:25.929047 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:53:25.929055 kernel: DMA32 empty
May 14 23:53:25.929061 kernel: Normal empty
May 14 23:53:25.929067 kernel: Movable zone start for each node
May 14 23:53:25.929074 kernel: Early memory node ranges
May 14 23:53:25.929080 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 14 23:53:25.929086 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 14 23:53:25.929099 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 14 23:53:25.929106 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 14 23:53:25.929112 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 14 23:53:25.929118 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 14 23:53:25.929124 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 14 23:53:25.929130 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 14 23:53:25.929139 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 23:53:25.929145 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 23:53:25.929152 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 23:53:25.929161 kernel: psci: probing for conduit method from ACPI.
May 14 23:53:25.929168 kernel: psci: PSCIv1.1 detected in firmware.
May 14 23:53:25.929174 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 23:53:25.929183 kernel: psci: Trusted OS migration not required
May 14 23:53:25.929189 kernel: psci: SMC Calling Convention v1.1
May 14 23:53:25.929196 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 23:53:25.929203 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 23:53:25.929210 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 23:53:25.929216 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 23:53:25.929223 kernel: Detected PIPT I-cache on CPU0
May 14 23:53:25.929229 kernel: CPU features: detected: GIC system register CPU interface
May 14 23:53:25.929236 kernel: CPU features: detected: Hardware dirty bit management
May 14 23:53:25.929242 kernel: CPU features: detected: Spectre-v4
May 14 23:53:25.929250 kernel: CPU features: detected: Spectre-BHB
May 14 23:53:25.929257 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 23:53:25.929264 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 23:53:25.929270 kernel: CPU features: detected: ARM erratum 1418040
May 14 23:53:25.929277 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 23:53:25.929283 kernel: alternatives: applying boot alternatives
May 14 23:53:25.929291 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:53:25.929298 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:53:25.929305 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:53:25.929312 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:53:25.929319 kernel: Fallback order for Node 0: 0
May 14 23:53:25.929327 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 14 23:53:25.929333 kernel: Policy zone: DMA
May 14 23:53:25.929340 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:53:25.929346 kernel: software IO TLB: area num 4.
May 14 23:53:25.929353 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 14 23:53:25.929360 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
May 14 23:53:25.929366 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 23:53:25.929373 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:53:25.929380 kernel: rcu: RCU event tracing is enabled.
May 14 23:53:25.929387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 23:53:25.929394 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:53:25.929400 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:53:25.929409 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:53:25.929415 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 23:53:25.929422 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 23:53:25.929428 kernel: GICv3: 256 SPIs implemented May 14 23:53:25.929435 kernel: GICv3: 0 Extended SPIs implemented May 14 23:53:25.929441 kernel: Root IRQ handler: gic_handle_irq May 14 23:53:25.929448 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 14 23:53:25.929454 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 23:53:25.929461 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 23:53:25.929468 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 14 23:53:25.929475 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 14 23:53:25.929483 kernel: GICv3: using LPI property table @0x00000000400f0000 May 14 23:53:25.929489 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 14 23:53:25.929496 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 23:53:25.929503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 23:53:25.929510 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 23:53:25.929516 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 23:53:25.929523 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 23:53:25.929530 kernel: arm-pv: using stolen time PV May 14 23:53:25.929546 kernel: Console: colour dummy device 80x25 May 14 23:53:25.929553 kernel: ACPI: Core revision 20230628 May 14 23:53:25.929560 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 23:53:25.929568 kernel: pid_max: default: 32768 minimum: 301 May 14 23:53:25.929575 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 23:53:25.929582 kernel: landlock: Up and running. May 14 23:53:25.929588 kernel: SELinux: Initializing. May 14 23:53:25.929595 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 23:53:25.929602 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 23:53:25.929609 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 14 23:53:25.929616 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 14 23:53:25.929623 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 14 23:53:25.929631 kernel: rcu: Hierarchical SRCU implementation. May 14 23:53:25.929638 kernel: rcu: Max phase no-delay instances is 400. May 14 23:53:25.929645 kernel: Platform MSI: ITS@0x8080000 domain created May 14 23:53:25.929652 kernel: PCI/MSI: ITS@0x8080000 domain created May 14 23:53:25.929659 kernel: Remapping and enabling EFI services. May 14 23:53:25.929665 kernel: smp: Bringing up secondary CPUs ... 
May 14 23:53:25.929672 kernel: Detected PIPT I-cache on CPU1 May 14 23:53:25.929679 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 23:53:25.929686 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 14 23:53:25.929695 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 23:53:25.929702 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 23:53:25.929713 kernel: Detected PIPT I-cache on CPU2 May 14 23:53:25.929722 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 14 23:53:25.929729 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 14 23:53:25.929736 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 23:53:25.929743 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 14 23:53:25.929751 kernel: Detected PIPT I-cache on CPU3 May 14 23:53:25.929758 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 14 23:53:25.929766 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 14 23:53:25.929774 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 23:53:25.929781 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 14 23:53:25.929788 kernel: smp: Brought up 1 node, 4 CPUs May 14 23:53:25.929795 kernel: SMP: Total of 4 processors activated. May 14 23:53:25.929802 kernel: CPU features: detected: 32-bit EL0 Support May 14 23:53:25.929810 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 23:53:25.929817 kernel: CPU features: detected: Common not Private translations May 14 23:53:25.929825 kernel: CPU features: detected: CRC32 instructions May 14 23:53:25.929833 kernel: CPU features: detected: Enhanced Virtualization Traps May 14 23:53:25.929841 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 23:53:25.929848 kernel: CPU features: detected: LSE atomic instructions May 14 23:53:25.929855 kernel: CPU features: detected: Privileged Access Never May 14 23:53:25.929862 kernel: CPU features: detected: RAS Extension Support May 14 23:53:25.929870 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 23:53:25.929877 kernel: CPU: All CPU(s) started at EL1 May 14 23:53:25.929884 kernel: alternatives: applying system-wide alternatives May 14 23:53:25.929894 kernel: devtmpfs: initialized May 14 23:53:25.929901 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 23:53:25.929909 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 23:53:25.929916 kernel: pinctrl core: initialized pinctrl subsystem May 14 23:53:25.929924 kernel: SMBIOS 3.0.0 present. 
May 14 23:53:25.929931 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 23:53:25.929938 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:53:25.929945 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 23:53:25.929953 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 23:53:25.929961 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 23:53:25.929969 kernel: audit: initializing netlink subsys (disabled)
May 14 23:53:25.929976 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 14 23:53:25.929984 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:53:25.929991 kernel: cpuidle: using governor menu
May 14 23:53:25.929998 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 23:53:25.930005 kernel: ASID allocator initialised with 32768 entries
May 14 23:53:25.930013 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:53:25.930020 kernel: Serial: AMBA PL011 UART driver
May 14 23:53:25.930028 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 23:53:25.930036 kernel: Modules: 0 pages in range for non-PLT usage
May 14 23:53:25.930043 kernel: Modules: 509264 pages in range for PLT usage
May 14 23:53:25.930050 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:53:25.930058 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:53:25.930065 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 23:53:25.930072 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 23:53:25.930080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:53:25.930087 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:53:25.930100 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 23:53:25.930108 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 23:53:25.930115 kernel: ACPI: Added _OSI(Module Device)
May 14 23:53:25.930122 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:53:25.930130 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:53:25.930137 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:53:25.930144 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:53:25.930151 kernel: ACPI: Interpreter enabled
May 14 23:53:25.930159 kernel: ACPI: Using GIC for interrupt routing
May 14 23:53:25.930166 kernel: ACPI: MCFG table detected, 1 entries
May 14 23:53:25.930175 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 23:53:25.930182 kernel: printk: console [ttyAMA0] enabled
May 14 23:53:25.930190 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 23:53:25.930336 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:53:25.930417 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 23:53:25.930491 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 23:53:25.930637 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 23:53:25.930715 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 23:53:25.930725 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 23:53:25.930733 kernel: PCI host bridge to bus 0000:00
May 14 23:53:25.930806 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 23:53:25.930878 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 23:53:25.930941 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 23:53:25.931002 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 23:53:25.931098 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 14 23:53:25.931184 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 14 23:53:25.931256 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 14 23:53:25.931326 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 14 23:53:25.931395 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 23:53:25.931464 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 23:53:25.931544 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 14 23:53:25.931625 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 14 23:53:25.931690 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 23:53:25.931753 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 23:53:25.931814 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 23:53:25.931824 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 23:53:25.931832 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 23:53:25.931843 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 23:53:25.931852 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 23:53:25.931860 kernel: iommu: Default domain type: Translated
May 14 23:53:25.931868 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 23:53:25.931875 kernel: efivars: Registered efivars operations
May 14 23:53:25.931882 kernel: vgaarb: loaded
May 14 23:53:25.931890 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 23:53:25.931897 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:53:25.931905 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:53:25.931912 kernel: pnp: PnP ACPI init
May 14 23:53:25.931996 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 23:53:25.932007 kernel: pnp: PnP ACPI: found 1 devices
May 14 23:53:25.932015 kernel: NET: Registered PF_INET protocol family
May 14 23:53:25.932023 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:53:25.932030 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:53:25.932038 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:53:25.932046 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:53:25.932054 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:53:25.932063 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:53:25.932071 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:53:25.932079 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:53:25.932086 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:53:25.932102 kernel: PCI: CLS 0 bytes, default 64
May 14 23:53:25.932109 kernel: kvm [1]: HYP mode not available
May 14 23:53:25.932117 kernel: Initialise system trusted keyrings May 14 23:53:25.932124 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 23:53:25.932131 kernel: Key type asymmetric registered May 14 23:53:25.932140 kernel: Asymmetric key parser 'x509' registered May 14 23:53:25.932148 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 23:53:25.932155 kernel: io scheduler mq-deadline registered May 14 23:53:25.932163 kernel: io scheduler kyber registered May 14 23:53:25.932170 kernel: io scheduler bfq registered May 14 23:53:25.932178 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 23:53:25.932185 kernel: ACPI: button: Power Button [PWRB] May 14 23:53:25.932193 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 23:53:25.932276 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 14 23:53:25.932290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 23:53:25.932297 kernel: thunder_xcv, ver 1.0 May 14 23:53:25.932305 kernel: thunder_bgx, ver 1.0 May 14 23:53:25.932312 kernel: nicpf, ver 1.0 May 14 23:53:25.932320 kernel: nicvf, ver 1.0 May 14 23:53:25.932412 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 23:53:25.932491 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:53:25 UTC (1747266805) May 14 23:53:25.932503 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 23:53:25.932514 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 23:53:25.932522 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 14 23:53:25.932550 kernel: watchdog: Hard watchdog permanently disabled May 14 23:53:25.932560 kernel: NET: Registered PF_INET6 protocol family May 14 23:53:25.932569 kernel: Segment Routing with IPv6 May 14 23:53:25.932576 kernel: In-situ OAM (IOAM) with IPv6 May 14 23:53:25.932583 kernel: NET: Registered PF_PACKET protocol family May 14 23:53:25.932591 kernel: Key type dns_resolver registered May 14 23:53:25.932598 kernel: registered taskstats version 1 May 14 23:53:25.932605 kernel: Loading compiled-in X.509 certificates May 14 23:53:25.932616 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4' May 14 23:53:25.932623 kernel: Key type .fscrypt registered May 14 23:53:25.932630 kernel: Key type fscrypt-provisioning registered May 14 23:53:25.932637 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 23:53:25.932644 kernel: ima: Allocated hash algorithm: sha1 May 14 23:53:25.932651 kernel: ima: No architecture policies found May 14 23:53:25.932659 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 23:53:25.932666 kernel: clk: Disabling unused clocks May 14 23:53:25.932675 kernel: Freeing unused kernel memory: 38336K May 14 23:53:25.932682 kernel: Run /init as init process May 14 23:53:25.932689 kernel: with arguments: May 14 23:53:25.932696 kernel: /init May 14 23:53:25.932703 kernel: with environment: May 14 23:53:25.932710 kernel: HOME=/ May 14 23:53:25.932718 kernel: TERM=linux May 14 23:53:25.932724 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 23:53:25.932733 systemd[1]: Successfully made /usr/ read-only. 
May 14 23:53:25.932745 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:53:25.932753 systemd[1]: Detected virtualization kvm. May 14 23:53:25.932761 systemd[1]: Detected architecture arm64. May 14 23:53:25.932769 systemd[1]: Running in initrd. May 14 23:53:25.932776 systemd[1]: No hostname configured, using default hostname. May 14 23:53:25.932784 systemd[1]: Hostname set to . May 14 23:53:25.932792 systemd[1]: Initializing machine ID from VM UUID. May 14 23:53:25.932801 systemd[1]: Queued start job for default target initrd.target. May 14 23:53:25.932810 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:53:25.932818 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:53:25.932826 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 23:53:25.932834 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:53:25.932842 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 23:53:25.932851 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 23:53:25.932861 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 23:53:25.932870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 23:53:25.932878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:53:25.932886 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:53:25.932894 systemd[1]: Reached target paths.target - Path Units. May 14 23:53:25.932902 systemd[1]: Reached target slices.target - Slice Units. May 14 23:53:25.932910 systemd[1]: Reached target swap.target - Swaps. May 14 23:53:25.932918 systemd[1]: Reached target timers.target - Timer Units. May 14 23:53:25.932926 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:53:25.932936 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:53:25.932944 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 23:53:25.932952 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 23:53:25.932960 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:53:25.932968 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:53:25.932976 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:53:25.932984 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:53:25.932992 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 23:53:25.933001 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:53:25.933009 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 23:53:25.933017 systemd[1]: Starting systemd-fsck-usr.service... 
May 14 23:53:25.933025 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:53:25.933033 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:53:25.933041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:53:25.933049 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 23:53:25.933057 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:53:25.933067 systemd[1]: Finished systemd-fsck-usr.service.
May 14 23:53:25.933075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:53:25.933110 systemd-journald[239]: Collecting audit messages is disabled.
May 14 23:53:25.933132 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:53:25.933141 systemd-journald[239]: Journal started
May 14 23:53:25.933159 systemd-journald[239]: Runtime Journal (/run/log/journal/cba555015df24dbc8f9eaf0cbd9f3564) is 5.9M, max 47.3M, 41.4M free.
May 14 23:53:25.922739 systemd-modules-load[240]: Inserted module 'overlay'
May 14 23:53:25.937553 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 23:53:25.937587 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:53:25.939329 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 14 23:53:25.940338 kernel: Bridge firewalling registered
May 14 23:53:25.942055 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:53:25.944076 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:53:25.945398 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:53:25.950720 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:53:25.953009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:53:25.955767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:53:25.960599 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:53:25.962257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:53:25.964982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:53:25.968645 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 23:53:25.973577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:53:25.976667 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:53:25.984312 dracut-cmdline[275]: dracut-dracut-053
May 14 23:53:25.987233 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:53:26.026824 systemd-resolved[279]: Positive Trust Anchors:
May 14 23:53:26.026841 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:53:26.026874 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:53:26.034508 systemd-resolved[279]: Defaulting to hostname 'linux'.
May 14 23:53:26.035607 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:53:26.036874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:53:26.062574 kernel: SCSI subsystem initialized
May 14 23:53:26.067555 kernel: Loading iSCSI transport class v2.0-870.
May 14 23:53:26.074562 kernel: iscsi: registered transport (tcp)
May 14 23:53:26.088555 kernel: iscsi: registered transport (qla4xxx)
May 14 23:53:26.088569 kernel: QLogic iSCSI HBA Driver
May 14 23:53:26.140680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 23:53:26.149703 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 23:53:26.168280 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 23:53:26.168333 kernel: device-mapper: uevent: version 1.0.3
May 14 23:53:26.168345 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 23:53:26.217570 kernel: raid6: neonx8 gen() 15788 MB/s
May 14 23:53:26.234565 kernel: raid6: neonx4 gen() 15811 MB/s
May 14 23:53:26.251565 kernel: raid6: neonx2 gen() 13347 MB/s
May 14 23:53:26.268561 kernel: raid6: neonx1 gen() 10523 MB/s
May 14 23:53:26.285560 kernel: raid6: int64x8 gen() 6788 MB/s
May 14 23:53:26.302557 kernel: raid6: int64x4 gen() 7350 MB/s
May 14 23:53:26.319555 kernel: raid6: int64x2 gen() 6109 MB/s
May 14 23:53:26.336663 kernel: raid6: int64x1 gen() 5052 MB/s
May 14 23:53:26.336678 kernel: raid6: using algorithm neonx4 gen() 15811 MB/s
May 14 23:53:26.354691 kernel: raid6: .... xor() 12360 MB/s, rmw enabled
May 14 23:53:26.354705 kernel: raid6: using neon recovery algorithm
May 14 23:53:26.360037 kernel: xor: measuring software checksum speed
May 14 23:53:26.360062 kernel: 8regs : 21579 MB/sec
May 14 23:53:26.360708 kernel: 32regs : 20876 MB/sec
May 14 23:53:26.361954 kernel: arm64_neon : 27766 MB/sec
May 14 23:53:26.361965 kernel: xor: using function: arm64_neon (27766 MB/sec)
May 14 23:53:26.412581 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 23:53:26.423914 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:53:26.434737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:53:26.448847 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 14 23:53:26.452558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:53:26.456119 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 23:53:26.472719 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation May 14 23:53:26.504565 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:53:26.516710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:53:26.558992 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:53:26.567756 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 23:53:26.578784 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 23:53:26.580659 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:53:26.582822 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:53:26.584682 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:53:26.593894 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 23:53:26.604462 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 23:53:26.617287 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 14 23:53:26.624248 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 23:53:26.627585 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 23:53:26.627622 kernel: GPT:9289727 != 19775487 May 14 23:53:26.629038 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 23:53:26.628484 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:53:26.633460 kernel: GPT:9289727 != 19775487 May 14 23:53:26.633479 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 23:53:26.633489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:53:26.628626 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:53:26.631371 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:53:26.635167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:53:26.635303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:53:26.640496 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:53:26.648856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:53:26.660556 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (505) May 14 23:53:26.660607 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (515) May 14 23:53:26.669770 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 23:53:26.671466 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:53:26.690727 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 23:53:26.698606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 23:53:26.704908 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 23:53:26.706159 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 23:53:26.720686 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 14 23:53:26.725692 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:53:26.731399 disk-uuid[551]: Primary Header is updated. May 14 23:53:26.731399 disk-uuid[551]: Secondary Entries is updated. May 14 23:53:26.731399 disk-uuid[551]: Secondary Header is updated. May 14 23:53:26.740570 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:53:26.748578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:53:27.746602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:53:27.747480 disk-uuid[552]: The operation has completed successfully. May 14 23:53:27.773749 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 23:53:27.773868 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 23:53:27.814690 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 23:53:27.817637 sh[572]: Success May 14 23:53:27.834602 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 23:53:27.872356 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 23:53:27.874269 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 23:53:27.875972 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 23:53:27.892771 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799 May 14 23:53:27.892825 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 23:53:27.892836 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 23:53:27.896022 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 23:53:27.897188 kernel: BTRFS info (device dm-0): using free space tree May 14 23:53:27.905364 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 23:53:27.906741 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 23:53:27.921781 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 23:53:27.923880 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 23:53:27.940750 kernel: BTRFS info (device vda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:53:27.940799 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:53:27.940810 kernel: BTRFS info (device vda6): using free space tree May 14 23:53:27.943570 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:53:27.948570 kernel: BTRFS info (device vda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:53:27.951450 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 23:53:27.956750 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 23:53:28.028101 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:53:28.036762 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 14 23:53:28.065640 ignition[665]: Ignition 2.20.0 May 14 23:53:28.065651 ignition[665]: Stage: fetch-offline May 14 23:53:28.069244 ignition[665]: no configs at "/usr/lib/ignition/base.d" May 14 23:53:28.069258 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:53:28.069476 ignition[665]: parsed url from cmdline: "" May 14 23:53:28.069480 ignition[665]: no config URL provided May 14 23:53:28.069485 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" May 14 23:53:28.069492 ignition[665]: no config at "/usr/lib/ignition/user.ign" May 14 23:53:28.069517 ignition[665]: op(1): [started] loading QEMU firmware config module May 14 23:53:28.073944 systemd-networkd[763]: lo: Link UP May 14 23:53:28.069522 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 23:53:28.073948 systemd-networkd[763]: lo: Gained carrier May 14 23:53:28.075208 systemd-networkd[763]: Enumeration completed May 14 23:53:28.075465 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 23:53:28.075857 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:53:28.075861 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:53:28.076823 systemd-networkd[763]: eth0: Link UP May 14 23:53:28.083430 ignition[665]: op(1): [finished] loading QEMU firmware config module May 14 23:53:28.077189 systemd-networkd[763]: eth0: Gained carrier May 14 23:53:28.077200 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:53:28.077501 systemd[1]: Reached target network.target - Network. May 14 23:53:28.096586 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:53:28.128177 ignition[665]: parsing config with SHA512: 9e489dd63ec078006ca4cbcf023565aeac555ffa3548e6a899edbb9362016b76e9b974fb2996ff08dd9553f825722ae85a505f3c6542a8f4301f4604ae050b09 May 14 23:53:28.134244 unknown[665]: fetched base config from "system" May 14 23:53:28.134257 unknown[665]: fetched user config from "qemu" May 14 23:53:28.134690 ignition[665]: fetch-offline: fetch-offline passed May 14 23:53:28.136406 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:53:28.134764 ignition[665]: Ignition finished successfully May 14 23:53:28.138781 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 23:53:28.149755 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 23:53:28.163402 ignition[771]: Ignition 2.20.0 May 14 23:53:28.163411 ignition[771]: Stage: kargs May 14 23:53:28.163612 ignition[771]: no configs at "/usr/lib/ignition/base.d" May 14 23:53:28.163621 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:53:28.164500 ignition[771]: kargs: kargs passed May 14 23:53:28.166366 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 23:53:28.164569 ignition[771]: Ignition finished successfully May 14 23:53:28.176752 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 14 23:53:28.187263 ignition[781]: Ignition 2.20.0 May 14 23:53:28.187274 ignition[781]: Stage: disks May 14 23:53:28.187454 ignition[781]: no configs at "/usr/lib/ignition/base.d" May 14 23:53:28.187464 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:53:28.189977 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 23:53:28.188356 ignition[781]: disks: disks passed May 14 23:53:28.191781 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 23:53:28.188411 ignition[781]: Ignition finished successfully May 14 23:53:28.193494 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 23:53:28.195161 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:53:28.197005 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:53:28.198514 systemd[1]: Reached target basic.target - Basic System. May 14 23:53:28.213730 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 23:53:28.224973 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 14 23:53:28.249768 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 23:53:28.262689 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 23:53:28.313568 kernel: EXT4-fs (vda9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none. May 14 23:53:28.314255 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 23:53:28.315618 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 23:53:28.331642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:53:28.334009 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 23:53:28.335024 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 23:53:28.335069 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 23:53:28.335104 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:53:28.340775 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 23:53:28.342515 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 23:53:28.349560 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800) May 14 23:53:28.351989 kernel: BTRFS info (device vda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01 May 14 23:53:28.352013 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 23:53:28.352024 kernel: BTRFS info (device vda6): using free space tree May 14 23:53:28.354557 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:53:28.355717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 23:53:28.390318 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory May 14 23:53:28.400632 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory May 14 23:53:28.404344 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory May 14 23:53:28.408373 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory May 14 23:53:28.483773 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
May 14 23:53:28.495653 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:53:28.497916 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:53:28.501552 kernel: BTRFS info (device vda6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:53:28.525349 ignition[914]: INFO : Ignition 2.20.0
May 14 23:53:28.525349 ignition[914]: INFO : Stage: mount
May 14 23:53:28.527833 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:53:28.527833 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:53:28.527833 ignition[914]: INFO : mount: mount passed
May 14 23:53:28.527833 ignition[914]: INFO : Ignition finished successfully
May 14 23:53:28.525852 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:53:28.528627 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:53:28.536630 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:53:29.040158 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:53:29.052738 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:53:29.060681 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
May 14 23:53:29.060722 kernel: BTRFS info (device vda6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:53:29.060733 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:53:29.062545 kernel: BTRFS info (device vda6): using free space tree
May 14 23:53:29.064554 kernel: BTRFS info (device vda6): auto enabling async discard
May 14 23:53:29.066022 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:53:29.081850 ignition[944]: INFO : Ignition 2.20.0
May 14 23:53:29.081850 ignition[944]: INFO : Stage: files
May 14 23:53:29.083506 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:53:29.083506 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 23:53:29.083506 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:53:29.087066 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:53:29.087066 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:53:29.089843 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:53:29.089843 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:53:29.089843 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:53:29.089213 unknown[944]: wrote ssh authorized keys file for user: core
May 14 23:53:29.094860 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:53:29.094860 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 23:53:29.152029 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:53:29.342639 systemd-networkd[763]: eth0: Gained IPv6LL
May 14 23:53:29.407177 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 23:53:29.409459 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 14 23:53:29.762069 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 14 23:53:30.167009 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 14 23:53:30.167009 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 14 23:53:30.170839 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:53:30.170839 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:53:30.170839 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 14 23:53:30.170839 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 14 23:53:30.170839 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 23:53:30.170839 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 23:53:30.170839 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 14 23:53:30.170839 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 14 23:53:30.201770 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 23:53:30.205786 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 23:53:30.207546 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 23:53:30.207546 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:53:30.207546 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:53:30.207546 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:53:30.207546 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:53:30.207546 ignition[944]: INFO : files: files passed
May 14 23:53:30.207546 ignition[944]: INFO : Ignition finished successfully
May 14 23:53:30.208877 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:53:30.221724 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:53:30.225322 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:53:30.226902 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:53:30.226991 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 23:53:30.234462 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 23:53:30.236718 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:53:30.236718 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:53:30.240065 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:53:30.239275 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:53:30.241877 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:53:30.252733 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:53:30.274516 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:53:30.274661 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:53:30.277146 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:53:30.278831 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:53:30.280724 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:53:30.282108 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:53:30.302612 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:53:30.314700 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 23:53:30.325280 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 23:53:30.326685 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:53:30.328928 systemd[1]: Stopped target timers.target - Timer Units. May 14 23:53:30.330875 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 23:53:30.331002 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:53:30.333791 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 23:53:30.336001 systemd[1]: Stopped target basic.target - Basic System. May 14 23:53:30.337802 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 23:53:30.339788 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:53:30.341977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 23:53:30.344190 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 23:53:30.346273 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:53:30.348436 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 23:53:30.350658 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 23:53:30.352660 systemd[1]: Stopped target swap.target - Swaps. May 14 23:53:30.354376 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 23:53:30.354504 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 23:53:30.357141 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 23:53:30.359350 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:53:30.361516 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 23:53:30.362596 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:53:30.363927 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 23:53:30.364050 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 23:53:30.367202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 23:53:30.367328 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:53:30.369582 systemd[1]: Stopped target paths.target - Path Units. May 14 23:53:30.371362 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 23:53:30.371475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:53:30.373640 systemd[1]: Stopped target slices.target - Slice Units. May 14 23:53:30.375643 systemd[1]: Stopped target sockets.target - Socket Units. May 14 23:53:30.377378 systemd[1]: iscsid.socket: Deactivated successfully. May 14 23:53:30.377462 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:53:30.379507 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 23:53:30.379604 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:53:30.381950 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 23:53:30.382069 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
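The Ignition op(4) through op(12) entries above are driven entirely by the Ignition config handed to the VM; Ignition runs in the initramfs and writes against the mounted root, which is why every path is prefixed with /sysroot. As a rough illustration only (the real config for this boot is not shown in the log, and the inline script here is a placeholder), a Butane snippet of the following shape, transpiled with butane, would produce similar file, link, and unit operations:

    cat <<'EOF' >config.bu
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /home/core/install.sh
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              echo "placeholder install script"
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
    EOF
    butane --pretty --strict config.bu >config.ign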
May 14 23:53:30.383922 systemd[1]: ignition-files.service: Deactivated successfully. May 14 23:53:30.384023 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 23:53:30.397732 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 23:53:30.399559 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 23:53:30.400550 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 23:53:30.400696 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:53:30.402865 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 23:53:30.402969 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:53:30.410409 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 23:53:30.413866 ignition[1000]: INFO : Ignition 2.20.0 May 14 23:53:30.413866 ignition[1000]: INFO : Stage: umount May 14 23:53:30.413866 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:53:30.413866 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:53:30.413866 ignition[1000]: INFO : umount: umount passed May 14 23:53:30.413866 ignition[1000]: INFO : Ignition finished successfully May 14 23:53:30.410508 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 23:53:30.413262 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 23:53:30.413360 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 23:53:30.415272 systemd[1]: Stopped target network.target - Network. May 14 23:53:30.422961 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 23:53:30.423049 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 23:53:30.425231 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 23:53:30.425281 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 23:53:30.427016 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 23:53:30.427058 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 23:53:30.428718 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 23:53:30.428762 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 23:53:30.431106 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 23:53:30.433101 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 23:53:30.435745 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 23:53:30.447467 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 23:53:30.447600 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 23:53:30.450707 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 23:53:30.450920 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 23:53:30.451009 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 23:53:30.454630 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 23:53:30.455216 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 23:53:30.455268 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 23:53:30.469247 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 23:53:30.470244 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 14 23:53:30.470317 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:53:30.472561 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:53:30.472619 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:53:30.475838 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 23:53:30.475887 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 23:53:30.477049 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 23:53:30.477106 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:53:30.480059 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:53:30.481968 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 23:53:30.482026 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 23:53:30.482353 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 23:53:30.484464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 23:53:30.490585 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 23:53:30.490656 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 23:53:30.493767 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 23:53:30.495592 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 23:53:30.504176 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 23:53:30.504323 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:53:30.506502 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 23:53:30.506555 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 23:53:30.509275 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 23:53:30.509309 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:53:30.511223 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 23:53:30.511277 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 23:53:30.515862 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 23:53:30.515914 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 23:53:30.519481 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:53:30.519528 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:53:30.530745 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 23:53:30.531819 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 23:53:30.531881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:53:30.535083 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 23:53:30.535136 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:53:30.537482 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 23:53:30.537542 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:53:30.539630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 14 23:53:30.539678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:53:30.543519 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 23:53:30.543593 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 23:53:30.543899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 23:53:30.543993 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 23:53:30.547588 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 23:53:30.550454 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 23:53:30.560464 systemd[1]: Switching root. May 14 23:53:30.587214 systemd-journald[239]: Journal stopped May 14 23:53:31.439582 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 14 23:53:31.439632 kernel: SELinux: policy capability network_peer_controls=1 May 14 23:53:31.439644 kernel: SELinux: policy capability open_perms=1 May 14 23:53:31.439654 kernel: SELinux: policy capability extended_socket_class=1 May 14 23:53:31.439664 kernel: SELinux: policy capability always_check_network=0 May 14 23:53:31.439676 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 23:53:31.439690 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 23:53:31.439699 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 23:53:31.439708 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 23:53:31.439718 kernel: audit: type=1403 audit(1747266810.737:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 23:53:31.439729 systemd[1]: Successfully loaded SELinux policy in 37.167ms. May 14 23:53:31.439745 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.658ms. May 14 23:53:31.439756 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:53:31.439767 systemd[1]: Detected virtualization kvm. May 14 23:53:31.439778 systemd[1]: Detected architecture arm64. May 14 23:53:31.439788 systemd[1]: Detected first boot. May 14 23:53:31.439799 systemd[1]: Initializing machine ID from VM UUID. May 14 23:53:31.439809 zram_generator::config[1047]: No configuration found. May 14 23:53:31.439822 kernel: NET: Registered PF_VSOCK protocol family May 14 23:53:31.439831 systemd[1]: Populated /etc with preset unit settings. May 14 23:53:31.439842 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 23:53:31.439852 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 23:53:31.439863 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 23:53:31.439874 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 23:53:31.439884 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 23:53:31.439894 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 23:53:31.439904 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 23:53:31.439915 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
May 14 23:53:31.439925 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 23:53:31.439939 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 23:53:31.439950 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 23:53:31.439961 systemd[1]: Created slice user.slice - User and Session Slice. May 14 23:53:31.439972 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:53:31.439982 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:53:31.439992 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 23:53:31.440002 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 23:53:31.440013 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 23:53:31.440024 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:53:31.440034 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 14 23:53:31.440045 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:53:31.440058 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 23:53:31.440068 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 23:53:31.440086 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 23:53:31.440097 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 23:53:31.440108 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:53:31.440122 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:53:31.440132 systemd[1]: Reached target slices.target - Slice Units. May 14 23:53:31.440144 systemd[1]: Reached target swap.target - Swaps. May 14 23:53:31.440154 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 23:53:31.440165 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 23:53:31.440175 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 23:53:31.440185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:53:31.440195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:53:31.440205 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:53:31.440216 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 23:53:31.440226 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 23:53:31.440236 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 23:53:31.440248 systemd[1]: Mounting media.mount - External Media Directory... May 14 23:53:31.440258 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 23:53:31.440268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 23:53:31.440280 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
May 14 23:53:31.440291 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 23:53:31.440302 systemd[1]: Reached target machines.target - Containers. May 14 23:53:31.440313 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 23:53:31.440323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:53:31.440335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:53:31.440345 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 23:53:31.440356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:53:31.440367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:53:31.440377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:53:31.440388 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 23:53:31.440398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:53:31.440409 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 23:53:31.440422 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 23:53:31.440433 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 23:53:31.440443 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 23:53:31.440454 systemd[1]: Stopped systemd-fsck-usr.service. May 14 23:53:31.440465 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:53:31.440475 kernel: loop: module loaded May 14 23:53:31.440485 kernel: fuse: init (API version 7.39) May 14 23:53:31.440495 kernel: ACPI: bus type drm_connector registered May 14 23:53:31.440506 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:53:31.440529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:53:31.440548 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 23:53:31.440573 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 23:53:31.440584 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 23:53:31.440596 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:53:31.440606 systemd[1]: verity-setup.service: Deactivated successfully. May 14 23:53:31.440616 systemd[1]: Stopped verity-setup.service. May 14 23:53:31.440629 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 23:53:31.440658 systemd-journald[1120]: Collecting audit messages is disabled. May 14 23:53:31.440681 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 23:53:31.440692 systemd-journald[1120]: Journal started May 14 23:53:31.440715 systemd-journald[1120]: Runtime Journal (/run/log/journal/cba555015df24dbc8f9eaf0cbd9f3564) is 5.9M, max 47.3M, 41.4M free. 
May 14 23:53:31.206203 systemd[1]: Queued start job for default target multi-user.target. May 14 23:53:31.219516 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 23:53:31.219964 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 23:53:31.443278 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:53:31.443926 systemd[1]: Mounted media.mount - External Media Directory. May 14 23:53:31.445051 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 23:53:31.446253 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 23:53:31.447476 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 23:53:31.450593 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 23:53:31.452088 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:53:31.453668 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 23:53:31.453840 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 23:53:31.455282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:53:31.455436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:53:31.456944 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:53:31.457127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:53:31.458433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:53:31.458609 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:53:31.460042 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 23:53:31.460223 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 23:53:31.461582 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:53:31.461734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:53:31.463116 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:53:31.464737 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 23:53:31.466382 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 23:53:31.467946 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 23:53:31.480203 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 23:53:31.490620 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 23:53:31.492593 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 23:53:31.493689 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 23:53:31.493742 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:53:31.495612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 23:53:31.497756 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 23:53:31.499753 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 23:53:31.500841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 14 23:53:31.502385 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 23:53:31.504633 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 23:53:31.505975 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:53:31.507414 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 23:53:31.512610 systemd-journald[1120]: Time spent on flushing to /var/log/journal/cba555015df24dbc8f9eaf0cbd9f3564 is 11.161ms for 867 entries. May 14 23:53:31.512610 systemd-journald[1120]: System Journal (/var/log/journal/cba555015df24dbc8f9eaf0cbd9f3564) is 8M, max 195.6M, 187.6M free. May 14 23:53:31.531859 systemd-journald[1120]: Received client request to flush runtime journal. May 14 23:53:31.531901 kernel: loop0: detected capacity change from 0 to 123192 May 14 23:53:31.512004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:53:31.513213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:53:31.516941 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 23:53:31.519718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:53:31.526392 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:53:31.527922 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 23:53:31.530000 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 23:53:31.531836 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 23:53:31.534984 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 23:53:31.536822 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 23:53:31.539033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:53:31.545482 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 23:53:31.551590 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 23:53:31.553773 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 23:53:31.556605 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 23:53:31.557258 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. May 14 23:53:31.557318 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. May 14 23:53:31.564277 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:53:31.567509 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 23:53:31.579235 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 23:53:31.579572 kernel: loop1: detected capacity change from 0 to 113512 May 14 23:53:31.580411 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 23:53:31.606656 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
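The systemd-journald lines above show the runtime journal in /run/log/journal being flushed into the persistent store under /var/log/journal once the root filesystem is writable. Transcripts like this one can be pulled back out of the journal later; the following are generic journalctl invocations (a sketch, not output captured from this machine):

    journalctl --disk-usage                  # combined size of runtime and persistent journals
    journalctl -b -u ignition-files.service  # the Ignition file-stage messages from this boot
    journalctl -b -k                         # kernel messages from this boot only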
May 14 23:53:31.619741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:53:31.627566 kernel: loop2: detected capacity change from 0 to 194096 May 14 23:53:31.631477 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. May 14 23:53:31.631495 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. May 14 23:53:31.635654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:53:31.672557 kernel: loop3: detected capacity change from 0 to 123192 May 14 23:53:31.681679 kernel: loop4: detected capacity change from 0 to 113512 May 14 23:53:31.688555 kernel: loop5: detected capacity change from 0 to 194096 May 14 23:53:31.698725 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 14 23:53:31.699153 (sd-merge)[1191]: Merged extensions into '/usr'. May 14 23:53:31.704081 systemd[1]: Reload requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)... May 14 23:53:31.704236 systemd[1]: Reloading... May 14 23:53:31.764616 zram_generator::config[1216]: No configuration found. May 14 23:53:31.871506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:53:31.881409 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 23:53:31.925029 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 23:53:31.925378 systemd[1]: Reloading finished in 220 ms. May 14 23:53:31.945339 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 23:53:31.946922 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 23:53:31.959801 systemd[1]: Starting ensure-sysext.service... May 14 23:53:31.961833 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:53:31.972295 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... May 14 23:53:31.972310 systemd[1]: Reloading... May 14 23:53:31.980435 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 23:53:31.980720 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 23:53:31.981360 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 23:53:31.981593 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. May 14 23:53:31.981649 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. May 14 23:53:31.984418 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:53:31.984433 systemd-tmpfiles[1255]: Skipping /boot May 14 23:53:31.993710 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. May 14 23:53:31.993726 systemd-tmpfiles[1255]: Skipping /boot May 14 23:53:32.029555 zram_generator::config[1287]: No configuration found. May 14 23:53:32.110000 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:53:32.160208 systemd[1]: Reloading finished in 187 ms. 
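The (sd-merge) lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes system extension images onto /usr, which is why a daemon reload follows immediately. On the running system the merge can be inspected and repeated with the standard tooling (generic commands, not output from this host):

    systemd-sysext status     # which extension images are merged and from where
    ls -l /etc/extensions     # the kubernetes.raw link written by Ignition earlier
    systemd-sysext refresh    # unmerge and re-merge after adding or removing images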
May 14 23:53:32.170316 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 23:53:32.194974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:53:32.203863 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:53:32.206470 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 23:53:32.208974 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 23:53:32.212818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:53:32.216812 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:53:32.219320 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 23:53:32.223604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:53:32.226198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:53:32.235630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:53:32.239974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:53:32.242181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:53:32.242326 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:53:32.245381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:53:32.245597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:53:32.248214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:53:32.248563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:53:32.250396 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:53:32.250557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:53:32.262313 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 23:53:32.277695 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 23:53:32.286797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:53:32.308974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:53:32.311310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:53:32.320404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:53:32.322834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:53:32.324832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 23:53:32.324939 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:53:32.326950 systemd-udevd[1325]: Using default interface naming scheme 'v255'. 
May 14 23:53:32.327746 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 23:53:32.334738 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 23:53:32.339441 systemd[1]: Finished ensure-sysext.service. May 14 23:53:32.340777 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 23:53:32.342511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:53:32.342680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:53:32.351851 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:53:32.351991 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:53:32.353361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:53:32.353502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:53:32.355164 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:53:32.355318 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:53:32.356840 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 23:53:32.364253 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 23:53:32.364333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 23:53:32.368754 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 23:53:32.370221 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 23:53:32.380925 augenrules[1368]: No rules May 14 23:53:32.381870 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:53:32.384177 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:53:32.384381 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:53:32.396722 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 23:53:32.404975 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 23:53:32.439565 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1375) May 14 23:53:32.467653 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 14 23:53:32.474706 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 23:53:32.480742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 23:53:32.506470 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 23:53:32.508350 systemd[1]: Reached target time-set.target - System Time Set. May 14 23:53:32.508752 systemd-networkd[1388]: lo: Link UP May 14 23:53:32.508761 systemd-networkd[1388]: lo: Gained carrier May 14 23:53:32.509988 systemd-resolved[1323]: Positive Trust Anchors: May 14 23:53:32.512676 systemd-networkd[1388]: Enumeration completed May 14 23:53:32.512867 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 23:53:32.515825 systemd-resolved[1323]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:53:32.515862 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:53:32.516605 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:53:32.516615 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:53:32.517186 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:53:32.517213 systemd-networkd[1388]: eth0: Link UP May 14 23:53:32.517215 systemd-networkd[1388]: eth0: Gained carrier May 14 23:53:32.517222 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:53:32.522052 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 23:53:32.522402 systemd-resolved[1323]: Defaulting to hostname 'linux'. May 14 23:53:32.528504 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 23:53:32.529998 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:53:32.532803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 23:53:32.535791 systemd[1]: Reached target network.target - Network. May 14 23:53:32.537097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:53:32.538713 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.62/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:53:32.540622 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection. May 14 23:53:32.544710 systemd-timesyncd[1365]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 23:53:32.544766 systemd-timesyncd[1365]: Initial clock synchronization to Wed 2025-05-14 23:53:32.342702 UTC. May 14 23:53:32.554254 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 23:53:32.579851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:53:32.599739 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 23:53:32.613747 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 23:53:32.628520 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:53:32.631311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:53:32.667077 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 23:53:32.668648 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
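eth0 is matched here by Flatcar's catch-all zz-default.network, which in essence just enables DHCP on interfaces that no more specific .network file claims; the DHCPv4 lease (10.0.0.62/16 via gateway 10.0.0.1) and the NTP server used by systemd-timesyncd both come from that lease. A unit file of roughly the following shape (paraphrased, not the literal file shipped on the image) placed under /etc/systemd/network/ would take precedence over the default and behave the same way:

    cat <<'EOF' >/etc/systemd/network/50-dhcp.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    networkctl status eth0    # shows the acquired address, gateway, and DNS after a networkd restart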
May 14 23:53:32.669794 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:53:32.670937 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 23:53:32.672192 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 23:53:32.673662 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 23:53:32.674809 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 23:53:32.676052 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 23:53:32.677424 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 23:53:32.677456 systemd[1]: Reached target paths.target - Path Units. May 14 23:53:32.678362 systemd[1]: Reached target timers.target - Timer Units. May 14 23:53:32.682907 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 23:53:32.685299 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 23:53:32.688413 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 23:53:32.689857 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 23:53:32.691100 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 23:53:32.696488 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 23:53:32.697930 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 23:53:32.700271 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 23:53:32.701917 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 23:53:32.703095 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:53:32.704064 systemd[1]: Reached target basic.target - Basic System. May 14 23:53:32.705054 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 23:53:32.705097 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 23:53:32.706009 systemd[1]: Starting containerd.service - containerd container runtime... May 14 23:53:32.707860 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 23:53:32.709747 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 23:53:32.712856 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 23:53:32.716734 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 23:53:32.719747 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 23:53:32.721208 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 23:53:32.723345 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 23:53:32.724426 jq[1432]: false May 14 23:53:32.726737 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 23:53:32.729027 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
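Most of the sockets listed here (docker.socket, sshd.socket, systemd-hostnamed.socket, and so on) are socket units: systemd takes the listening socket first so that the corresponding service can be started on demand without dropping early connections. The mapping from socket to service can be checked later with ordinary systemctl queries (generic commands, not output from this boot):

    systemctl list-sockets           # listening sockets and the units they activate
    systemctl status docker.socket   # ListenStream shows the /run/docker.sock path noted during the reload above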
May 14 23:53:32.735095 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 23:53:32.737024 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 23:53:32.737468 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 23:53:32.738092 systemd[1]: Starting update-engine.service - Update Engine... May 14 23:53:32.738629 extend-filesystems[1433]: Found loop3 May 14 23:53:32.740287 extend-filesystems[1433]: Found loop4 May 14 23:53:32.740287 extend-filesystems[1433]: Found loop5 May 14 23:53:32.740287 extend-filesystems[1433]: Found vda May 14 23:53:32.740287 extend-filesystems[1433]: Found vda1 May 14 23:53:32.740287 extend-filesystems[1433]: Found vda2 May 14 23:53:32.740287 extend-filesystems[1433]: Found vda3 May 14 23:53:32.740287 extend-filesystems[1433]: Found usr May 14 23:53:32.740287 extend-filesystems[1433]: Found vda4 May 14 23:53:32.740287 extend-filesystems[1433]: Found vda6 May 14 23:53:32.740287 extend-filesystems[1433]: Found vda7 May 14 23:53:32.740287 extend-filesystems[1433]: Found vda9 May 14 23:53:32.740287 extend-filesystems[1433]: Checking size of /dev/vda9 May 14 23:53:32.740242 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 23:53:32.767743 extend-filesystems[1433]: Resized partition /dev/vda9 May 14 23:53:32.755766 dbus-daemon[1431]: [system] SELinux support is enabled May 14 23:53:32.745420 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 23:53:32.769429 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024) May 14 23:53:32.752299 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 23:53:32.752496 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 23:53:32.772301 jq[1448]: true May 14 23:53:32.772558 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 23:53:32.752759 systemd[1]: motdgen.service: Deactivated successfully. May 14 23:53:32.752910 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 23:53:32.759456 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 23:53:32.773945 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 23:53:32.774180 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 23:53:32.790428 jq[1457]: true May 14 23:53:32.794516 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1382) May 14 23:53:32.791437 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 23:53:32.805396 tar[1456]: linux-arm64/helm May 14 23:53:32.808644 update_engine[1445]: I20250514 23:53:32.808473 1445 main.cc:92] Flatcar Update Engine starting May 14 23:53:32.809270 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 23:53:32.809304 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
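The extend-filesystems step grows the root filesystem in place: /dev/vda9 stays mounted at / while resize2fs takes it from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 7.1 GiB, so the filesystem fills whatever partition the image was written onto. Done by hand on a comparable ext4 root this is simply (a sketch, device name assumed):

    resize2fs /dev/vda9   # online grow of a mounted ext4 filesystem up to the partition size
    df -h /               # confirm the new size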
May 14 23:53:32.811283 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 23:53:32.811402 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 23:53:32.815927 systemd[1]: Started update-engine.service - Update Engine. May 14 23:53:32.816974 update_engine[1445]: I20250514 23:53:32.816015 1445 update_check_scheduler.cc:74] Next update check in 6m38s May 14 23:53:32.822919 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (Power Button) May 14 23:53:32.825087 systemd-logind[1440]: New seat seat0. May 14 23:53:32.833396 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 23:53:32.834644 systemd[1]: Started systemd-logind.service - User Login Management. May 14 23:53:32.839552 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 23:53:32.861728 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 23:53:32.861728 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 23:53:32.861728 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 23:53:32.865825 extend-filesystems[1433]: Resized filesystem in /dev/vda9 May 14 23:53:32.866217 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 23:53:32.867957 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 23:53:32.892614 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 23:53:32.909574 bash[1485]: Updated "/home/core/.ssh/authorized_keys" May 14 23:53:32.912098 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 23:53:32.913883 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 23:53:33.000555 containerd[1458]: time="2025-05-14T23:53:33.000340920Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 14 23:53:33.026748 containerd[1458]: time="2025-05-14T23:53:33.026525842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 23:53:33.028186 containerd[1458]: time="2025-05-14T23:53:33.028153379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028318464Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028345172Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028541138Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028560828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028617520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028632687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028822726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028835827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028849708Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028860508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.028929131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 23:53:33.029668 containerd[1458]: time="2025-05-14T23:53:33.029113595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 23:53:33.029923 containerd[1458]: time="2025-05-14T23:53:33.029226316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:53:33.029923 containerd[1458]: time="2025-05-14T23:53:33.029238364Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 23:53:33.029923 containerd[1458]: time="2025-05-14T23:53:33.029319386Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 23:53:33.029923 containerd[1458]: time="2025-05-14T23:53:33.029377014Z" level=info msg="metadata content store policy set" policy=shared May 14 23:53:33.033398 containerd[1458]: time="2025-05-14T23:53:33.033371227Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 23:53:33.033591 containerd[1458]: time="2025-05-14T23:53:33.033572885Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 23:53:33.033706 containerd[1458]: time="2025-05-14T23:53:33.033642873Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 14 23:53:33.033770 containerd[1458]: time="2025-05-14T23:53:33.033757544Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 14 23:53:33.033822 containerd[1458]: time="2025-05-14T23:53:33.033810220Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 14 23:53:33.034062 containerd[1458]: time="2025-05-14T23:53:33.034042095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 23:53:33.034556 containerd[1458]: time="2025-05-14T23:53:33.034514581Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 23:53:33.034688 containerd[1458]: time="2025-05-14T23:53:33.034669022Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 14 23:53:33.034729 containerd[1458]: time="2025-05-14T23:53:33.034692884Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 14 23:53:33.034729 containerd[1458]: time="2025-05-14T23:53:33.034709182Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 14 23:53:33.034729 containerd[1458]: time="2025-05-14T23:53:33.034724193Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 23:53:33.034779 containerd[1458]: time="2025-05-14T23:53:33.034737684Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 23:53:33.034779 containerd[1458]: time="2025-05-14T23:53:33.034750589Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 23:53:33.034779 containerd[1458]: time="2025-05-14T23:53:33.034764197Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 23:53:33.034829 containerd[1458]: time="2025-05-14T23:53:33.034779052Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 23:53:33.034829 containerd[1458]: time="2025-05-14T23:53:33.034791880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:53:33.034829 containerd[1458]: time="2025-05-14T23:53:33.034804279Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:53:33.034829 containerd[1458]: time="2025-05-14T23:53:33.034815352Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 23:53:33.034892 containerd[1458]: time="2025-05-14T23:53:33.034835588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 23:53:33.034892 containerd[1458]: time="2025-05-14T23:53:33.034855941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 23:53:33.034892 containerd[1458]: time="2025-05-14T23:53:33.034868184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:53:33.034892 containerd[1458]: time="2025-05-14T23:53:33.034881168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 23:53:33.034959 containerd[1458]: time="2025-05-14T23:53:33.034892904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 23:53:33.034959 containerd[1458]: time="2025-05-14T23:53:33.034906668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 14 23:53:33.034959 containerd[1458]: time="2025-05-14T23:53:33.034918170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 23:53:33.034959 containerd[1458]: time="2025-05-14T23:53:33.034930257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 14 23:53:33.034959 containerd[1458]: time="2025-05-14T23:53:33.034943709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 14 23:53:33.034959 containerd[1458]: time="2025-05-14T23:53:33.034957862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 14 23:53:33.035058 containerd[1458]: time="2025-05-14T23:53:33.034969287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 23:53:33.035058 containerd[1458]: time="2025-05-14T23:53:33.034981257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 14 23:53:33.035058 containerd[1458]: time="2025-05-14T23:53:33.034993227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 23:53:33.035058 containerd[1458]: time="2025-05-14T23:53:33.035007497Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 14 23:53:33.035058 containerd[1458]: time="2025-05-14T23:53:33.035028474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 14 23:53:33.035058 containerd[1458]: time="2025-05-14T23:53:33.035040990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 23:53:33.035058 containerd[1458]: time="2025-05-14T23:53:33.035051088Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 23:53:33.035515 containerd[1458]: time="2025-05-14T23:53:33.035496280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 23:53:33.035571 containerd[1458]: time="2025-05-14T23:53:33.035523808Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 14 23:53:33.035571 containerd[1458]: time="2025-05-14T23:53:33.035548956Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 23:53:33.035571 containerd[1458]: time="2025-05-14T23:53:33.035561823Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 14 23:53:33.035571 containerd[1458]: time="2025-05-14T23:53:33.035570908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 23:53:33.035638 containerd[1458]: time="2025-05-14T23:53:33.035582410Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 14 23:53:33.035638 containerd[1458]: time="2025-05-14T23:53:33.035592899Z" level=info msg="NRI interface is disabled by configuration." May 14 23:53:33.035638 containerd[1458]: time="2025-05-14T23:53:33.035604206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 14 23:53:33.036118 containerd[1458]: time="2025-05-14T23:53:33.036068113Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 23:53:33.036118 containerd[1458]: time="2025-05-14T23:53:33.036120711Z" level=info msg="Connect containerd service" May 14 23:53:33.036264 containerd[1458]: time="2025-05-14T23:53:33.036163991Z" level=info msg="using legacy CRI server" May 14 23:53:33.036264 containerd[1458]: time="2025-05-14T23:53:33.036172763Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:53:33.036540 containerd[1458]: time="2025-05-14T23:53:33.036513890Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 23:53:33.037327 containerd[1458]: time="2025-05-14T23:53:33.037296388Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:53:33.038201 
containerd[1458]: time="2025-05-14T23:53:33.038129768Z" level=info msg="Start subscribing containerd event" May 14 23:53:33.038201 containerd[1458]: time="2025-05-14T23:53:33.038173789Z" level=info msg="Start recovering state" May 14 23:53:33.038262 containerd[1458]: time="2025-05-14T23:53:33.038240618Z" level=info msg="Start event monitor" May 14 23:53:33.038262 containerd[1458]: time="2025-05-14T23:53:33.038251107Z" level=info msg="Start snapshots syncer" May 14 23:53:33.038262 containerd[1458]: time="2025-05-14T23:53:33.038259334Z" level=info msg="Start cni network conf syncer for default" May 14 23:53:33.038324 containerd[1458]: time="2025-05-14T23:53:33.038266625Z" level=info msg="Start streaming server" May 14 23:53:33.039046 containerd[1458]: time="2025-05-14T23:53:33.039024169Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:53:33.039098 containerd[1458]: time="2025-05-14T23:53:33.039085228Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:53:33.040729 containerd[1458]: time="2025-05-14T23:53:33.040700716Z" level=info msg="containerd successfully booted in 0.042020s" May 14 23:53:33.040797 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:53:33.167169 tar[1456]: linux-arm64/LICENSE May 14 23:53:33.167169 tar[1456]: linux-arm64/README.md May 14 23:53:33.189574 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:53:33.569091 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:53:33.587287 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 23:53:33.599800 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 23:53:33.604886 systemd[1]: issuegen.service: Deactivated successfully. May 14 23:53:33.605118 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 23:53:33.607779 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 23:53:33.618405 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 23:53:33.622818 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 23:53:33.624948 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 23:53:33.626264 systemd[1]: Reached target getty.target - Login Prompts. May 14 23:53:34.526651 systemd-networkd[1388]: eth0: Gained IPv6LL May 14 23:53:34.530606 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:53:34.533050 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:53:34.551191 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 23:53:34.554017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:34.556306 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:53:34.572120 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 23:53:34.572375 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 23:53:34.574743 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:53:34.582755 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 23:53:35.094174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:35.095703 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 14 23:53:35.102106 systemd[1]: Startup finished in 591ms (kernel) + 5.019s (initrd) + 4.403s (userspace) = 10.014s. May 14 23:53:35.103404 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:53:35.678311 kubelet[1543]: E0514 23:53:35.678240 1543 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:53:35.681083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:53:35.681239 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:53:35.681557 systemd[1]: kubelet.service: Consumed 857ms CPU time, 243.6M memory peak. May 14 23:53:38.303093 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:53:38.304340 systemd[1]: Started sshd@0-10.0.0.62:22-10.0.0.1:44432.service - OpenSSH per-connection server daemon (10.0.0.1:44432). May 14 23:53:38.406356 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 44432 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:53:38.406950 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:38.412705 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 23:53:38.426785 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:53:38.432439 systemd-logind[1440]: New session 1 of user core. May 14 23:53:38.436548 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 23:53:38.447914 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:53:38.450665 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:53:38.454148 systemd-logind[1440]: New session c1 of user core. May 14 23:53:38.555315 systemd[1562]: Queued start job for default target default.target. May 14 23:53:38.566483 systemd[1562]: Created slice app.slice - User Application Slice. May 14 23:53:38.566671 systemd[1562]: Reached target paths.target - Paths. May 14 23:53:38.566770 systemd[1562]: Reached target timers.target - Timers. May 14 23:53:38.568201 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:53:38.577667 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:53:38.577749 systemd[1562]: Reached target sockets.target - Sockets. May 14 23:53:38.577790 systemd[1562]: Reached target basic.target - Basic System. May 14 23:53:38.577817 systemd[1562]: Reached target default.target - Main User Target. May 14 23:53:38.577842 systemd[1562]: Startup finished in 117ms. May 14 23:53:38.578038 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:53:38.579841 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:53:38.636952 systemd[1]: Started sshd@1-10.0.0.62:22-10.0.0.1:44444.service - OpenSSH per-connection server daemon (10.0.0.1:44444). 
May 14 23:53:38.700379 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 44444 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:53:38.701849 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:38.706188 systemd-logind[1440]: New session 2 of user core. May 14 23:53:38.718712 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 23:53:38.769568 sshd[1575]: Connection closed by 10.0.0.1 port 44444 May 14 23:53:38.770039 sshd-session[1573]: pam_unix(sshd:session): session closed for user core May 14 23:53:38.784759 systemd[1]: sshd@1-10.0.0.62:22-10.0.0.1:44444.service: Deactivated successfully. May 14 23:53:38.786388 systemd[1]: session-2.scope: Deactivated successfully. May 14 23:53:38.787106 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. May 14 23:53:38.802869 systemd[1]: Started sshd@2-10.0.0.62:22-10.0.0.1:44448.service - OpenSSH per-connection server daemon (10.0.0.1:44448). May 14 23:53:38.803750 systemd-logind[1440]: Removed session 2. May 14 23:53:38.843482 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 44448 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:53:38.844786 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:38.848916 systemd-logind[1440]: New session 3 of user core. May 14 23:53:38.858714 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:53:38.905116 sshd[1583]: Connection closed by 10.0.0.1 port 44448 May 14 23:53:38.905399 sshd-session[1580]: pam_unix(sshd:session): session closed for user core May 14 23:53:38.915510 systemd[1]: sshd@2-10.0.0.62:22-10.0.0.1:44448.service: Deactivated successfully. May 14 23:53:38.916936 systemd[1]: session-3.scope: Deactivated successfully. May 14 23:53:38.919002 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. May 14 23:53:38.920778 systemd[1]: Started sshd@3-10.0.0.62:22-10.0.0.1:44460.service - OpenSSH per-connection server daemon (10.0.0.1:44460). May 14 23:53:38.922594 systemd-logind[1440]: Removed session 3. May 14 23:53:38.962913 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 44460 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:53:38.964031 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:38.967812 systemd-logind[1440]: New session 4 of user core. May 14 23:53:38.979688 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 23:53:39.033492 sshd[1591]: Connection closed by 10.0.0.1 port 44460 May 14 23:53:39.033729 sshd-session[1588]: pam_unix(sshd:session): session closed for user core May 14 23:53:39.045662 systemd[1]: sshd@3-10.0.0.62:22-10.0.0.1:44460.service: Deactivated successfully. May 14 23:53:39.047157 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:53:39.048430 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. May 14 23:53:39.049588 systemd[1]: Started sshd@4-10.0.0.62:22-10.0.0.1:44468.service - OpenSSH per-connection server daemon (10.0.0.1:44468). May 14 23:53:39.050424 systemd-logind[1440]: Removed session 4. 
May 14 23:53:39.091435 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 44468 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:53:39.092720 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:39.096486 systemd-logind[1440]: New session 5 of user core. May 14 23:53:39.106696 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 23:53:39.166555 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 23:53:39.166837 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:53:39.190510 sudo[1600]: pam_unix(sudo:session): session closed for user root May 14 23:53:39.192208 sshd[1599]: Connection closed by 10.0.0.1 port 44468 May 14 23:53:39.193054 sshd-session[1596]: pam_unix(sshd:session): session closed for user core May 14 23:53:39.205093 systemd[1]: sshd@4-10.0.0.62:22-10.0.0.1:44468.service: Deactivated successfully. May 14 23:53:39.207101 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:53:39.209032 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. May 14 23:53:39.227993 systemd[1]: Started sshd@5-10.0.0.62:22-10.0.0.1:44472.service - OpenSSH per-connection server daemon (10.0.0.1:44472). May 14 23:53:39.228868 systemd-logind[1440]: Removed session 5. May 14 23:53:39.282479 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 44472 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:53:39.283817 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:39.287926 systemd-logind[1440]: New session 6 of user core. May 14 23:53:39.296671 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 23:53:39.349009 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 23:53:39.349278 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:53:39.352226 sudo[1610]: pam_unix(sudo:session): session closed for user root May 14 23:53:39.356419 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 23:53:39.356912 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:53:39.373797 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:53:39.395509 augenrules[1632]: No rules May 14 23:53:39.396748 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:53:39.396992 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:53:39.398028 sudo[1609]: pam_unix(sudo:session): session closed for user root May 14 23:53:39.399941 sshd[1608]: Connection closed by 10.0.0.1 port 44472 May 14 23:53:39.399417 sshd-session[1605]: pam_unix(sshd:session): session closed for user core May 14 23:53:39.405629 systemd[1]: sshd@5-10.0.0.62:22-10.0.0.1:44472.service: Deactivated successfully. May 14 23:53:39.406996 systemd[1]: session-6.scope: Deactivated successfully. May 14 23:53:39.408707 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. May 14 23:53:39.410864 systemd[1]: Started sshd@6-10.0.0.62:22-10.0.0.1:44478.service - OpenSSH per-connection server daemon (10.0.0.1:44478). May 14 23:53:39.412043 systemd-logind[1440]: Removed session 6. 
May 14 23:53:39.451454 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 44478 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:53:39.452618 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:53:39.458758 systemd-logind[1440]: New session 7 of user core. May 14 23:53:39.471151 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 23:53:39.521779 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 23:53:39.522038 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:53:39.905810 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 23:53:39.905925 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 23:53:40.169985 dockerd[1664]: time="2025-05-14T23:53:40.169392298Z" level=info msg="Starting up" May 14 23:53:40.325284 dockerd[1664]: time="2025-05-14T23:53:40.325242982Z" level=info msg="Loading containers: start." May 14 23:53:40.483332 kernel: Initializing XFRM netlink socket May 14 23:53:40.569500 systemd-networkd[1388]: docker0: Link UP May 14 23:53:40.598967 dockerd[1664]: time="2025-05-14T23:53:40.598815619Z" level=info msg="Loading containers: done." May 14 23:53:40.613559 dockerd[1664]: time="2025-05-14T23:53:40.613310193Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 23:53:40.613559 dockerd[1664]: time="2025-05-14T23:53:40.613408013Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 May 14 23:53:40.613711 dockerd[1664]: time="2025-05-14T23:53:40.613598942Z" level=info msg="Daemon has completed initialization" May 14 23:53:40.647747 dockerd[1664]: time="2025-05-14T23:53:40.647684944Z" level=info msg="API listen on /run/docker.sock" May 14 23:53:40.648041 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 23:53:41.463791 containerd[1458]: time="2025-05-14T23:53:41.463752192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 14 23:53:41.981135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589579036.mount: Deactivated successfully. 
May 14 23:53:43.454993 containerd[1458]: time="2025-05-14T23:53:43.454942003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:43.456665 containerd[1458]: time="2025-05-14T23:53:43.456611523Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 14 23:53:43.457673 containerd[1458]: time="2025-05-14T23:53:43.457575281Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:43.461983 containerd[1458]: time="2025-05-14T23:53:43.461115774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:43.461983 containerd[1458]: time="2025-05-14T23:53:43.461838384Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.998046721s" May 14 23:53:43.461983 containerd[1458]: time="2025-05-14T23:53:43.461863139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 14 23:53:43.480585 containerd[1458]: time="2025-05-14T23:53:43.480551694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 14 23:53:45.139490 containerd[1458]: time="2025-05-14T23:53:45.139422766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:45.139924 containerd[1458]: time="2025-05-14T23:53:45.139863673Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 14 23:53:45.140701 containerd[1458]: time="2025-05-14T23:53:45.140672977Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:45.143493 containerd[1458]: time="2025-05-14T23:53:45.143443859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:45.144954 containerd[1458]: time="2025-05-14T23:53:45.144599275Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.664010772s" May 14 23:53:45.144954 containerd[1458]: time="2025-05-14T23:53:45.144631231Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 14 
23:53:45.163738 containerd[1458]: time="2025-05-14T23:53:45.163647541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 14 23:53:45.838942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:53:45.848711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:45.943700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:45.947521 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:53:46.014697 kubelet[1945]: E0514 23:53:46.014017 1945 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:53:46.017454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:53:46.017721 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:53:46.017972 systemd[1]: kubelet.service: Consumed 133ms CPU time, 97.2M memory peak. May 14 23:53:46.198667 containerd[1458]: time="2025-05-14T23:53:46.198524021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:46.202178 containerd[1458]: time="2025-05-14T23:53:46.202109838Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 14 23:53:46.203950 containerd[1458]: time="2025-05-14T23:53:46.203894185Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:46.207412 containerd[1458]: time="2025-05-14T23:53:46.207378138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:46.208617 containerd[1458]: time="2025-05-14T23:53:46.208557508Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.044660834s" May 14 23:53:46.208617 containerd[1458]: time="2025-05-14T23:53:46.208592990Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 14 23:53:46.227473 containerd[1458]: time="2025-05-14T23:53:46.227438622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 23:53:47.181389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333003009.mount: Deactivated successfully. 
May 14 23:53:47.427375 containerd[1458]: time="2025-05-14T23:53:47.427298770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:47.428869 containerd[1458]: time="2025-05-14T23:53:47.428785999Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 14 23:53:47.429824 containerd[1458]: time="2025-05-14T23:53:47.429660103Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:47.432677 containerd[1458]: time="2025-05-14T23:53:47.432167585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:47.433248 containerd[1458]: time="2025-05-14T23:53:47.432937496Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.20546287s" May 14 23:53:47.433248 containerd[1458]: time="2025-05-14T23:53:47.432973157Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 14 23:53:47.454906 containerd[1458]: time="2025-05-14T23:53:47.454861228Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 23:53:48.109575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29245959.mount: Deactivated successfully. 
May 14 23:53:48.914439 containerd[1458]: time="2025-05-14T23:53:48.914371390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:48.915577 containerd[1458]: time="2025-05-14T23:53:48.915315404Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 14 23:53:48.916330 containerd[1458]: time="2025-05-14T23:53:48.916301953Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:48.921678 containerd[1458]: time="2025-05-14T23:53:48.921506088Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.466605369s" May 14 23:53:48.921678 containerd[1458]: time="2025-05-14T23:53:48.921562734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 23:53:48.923020 containerd[1458]: time="2025-05-14T23:53:48.921847202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:48.942005 containerd[1458]: time="2025-05-14T23:53:48.941964732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 14 23:53:49.385869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041830543.mount: Deactivated successfully. 
May 14 23:53:49.392358 containerd[1458]: time="2025-05-14T23:53:49.392305743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:49.393591 containerd[1458]: time="2025-05-14T23:53:49.393544281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 14 23:53:49.394488 containerd[1458]: time="2025-05-14T23:53:49.394456116Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:49.397116 containerd[1458]: time="2025-05-14T23:53:49.397072256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:49.397921 containerd[1458]: time="2025-05-14T23:53:49.397884229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 455.879909ms" May 14 23:53:49.397921 containerd[1458]: time="2025-05-14T23:53:49.397916533Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 14 23:53:49.417706 containerd[1458]: time="2025-05-14T23:53:49.417664706Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 14 23:53:49.897351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905511220.mount: Deactivated successfully. May 14 23:53:51.810915 containerd[1458]: time="2025-05-14T23:53:51.810859647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:51.811368 containerd[1458]: time="2025-05-14T23:53:51.811323426Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 14 23:53:51.812364 containerd[1458]: time="2025-05-14T23:53:51.812337148Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:51.815566 containerd[1458]: time="2025-05-14T23:53:51.815216404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:53:51.817387 containerd[1458]: time="2025-05-14T23:53:51.817351162Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.399430815s" May 14 23:53:51.817425 containerd[1458]: time="2025-05-14T23:53:51.817390073Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 14 23:53:56.088924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 14 23:53:56.098965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:56.188010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:56.191089 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:53:56.231454 kubelet[2166]: E0514 23:53:56.231386 2166 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:53:56.233976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:53:56.234117 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:53:56.235746 systemd[1]: kubelet.service: Consumed 122ms CPU time, 97M memory peak. May 14 23:53:57.576057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:57.576198 systemd[1]: kubelet.service: Consumed 122ms CPU time, 97M memory peak. May 14 23:53:57.587798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:57.603861 systemd[1]: Reload requested from client PID 2181 ('systemctl') (unit session-7.scope)... May 14 23:53:57.603881 systemd[1]: Reloading... May 14 23:53:57.676579 zram_generator::config[2231]: No configuration found. May 14 23:53:57.885472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:53:57.959851 systemd[1]: Reloading finished in 355 ms. May 14 23:53:58.003813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:58.006793 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:58.007606 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:53:58.008615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:58.008666 systemd[1]: kubelet.service: Consumed 78ms CPU time, 82.4M memory peak. May 14 23:53:58.010273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:53:58.110923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:53:58.114266 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:53:58.152115 kubelet[2272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:53:58.152115 kubelet[2272]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:53:58.152115 kubelet[2272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:53:58.153779 kubelet[2272]: I0514 23:53:58.152426 2272 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:53:58.569685 kubelet[2272]: I0514 23:53:58.569586 2272 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 23:53:58.569812 kubelet[2272]: I0514 23:53:58.569795 2272 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:53:58.570076 kubelet[2272]: I0514 23:53:58.570059 2272 server.go:927] "Client rotation is on, will bootstrap in background" May 14 23:53:58.609720 kubelet[2272]: I0514 23:53:58.609681 2272 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:53:58.609720 kubelet[2272]: E0514 23:53:58.609700 2272 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.621407 kubelet[2272]: I0514 23:53:58.621382 2272 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 23:53:58.622499 kubelet[2272]: I0514 23:53:58.622447 2272 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:53:58.622676 kubelet[2272]: I0514 23:53:58.622491 2272 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 23:53:58.622771 kubelet[2272]: I0514 23:53:58.622733 2272 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:53:58.622771 kubelet[2272]: I0514 23:53:58.622743 2272 container_manager_linux.go:301] "Creating device plugin manager" May 14 23:53:58.623013 kubelet[2272]: I0514 23:53:58.622990 2272 state_mem.go:36] "Initialized new in-memory state store" May 14 
23:53:58.623934 kubelet[2272]: I0514 23:53:58.623913 2272 kubelet.go:400] "Attempting to sync node with API server" May 14 23:53:58.623973 kubelet[2272]: I0514 23:53:58.623938 2272 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:53:58.624241 kubelet[2272]: I0514 23:53:58.624063 2272 kubelet.go:312] "Adding apiserver pod source" May 14 23:53:58.624241 kubelet[2272]: I0514 23:53:58.624075 2272 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:53:58.624672 kubelet[2272]: W0514 23:53:58.624608 2272 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.624722 kubelet[2272]: E0514 23:53:58.624680 2272 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.624814 kubelet[2272]: W0514 23:53:58.624650 2272 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.625083 kubelet[2272]: E0514 23:53:58.625059 2272 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.625246 kubelet[2272]: I0514 23:53:58.625224 2272 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:53:58.625665 kubelet[2272]: I0514 23:53:58.625635 2272 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:53:58.625712 kubelet[2272]: W0514 23:53:58.625688 2272 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 14 23:53:58.626514 kubelet[2272]: I0514 23:53:58.626490 2272 server.go:1264] "Started kubelet" May 14 23:53:58.629305 kubelet[2272]: I0514 23:53:58.626830 2272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:53:58.629305 kubelet[2272]: I0514 23:53:58.627132 2272 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:53:58.629305 kubelet[2272]: I0514 23:53:58.627166 2272 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:53:58.629305 kubelet[2272]: I0514 23:53:58.627674 2272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:53:58.629305 kubelet[2272]: I0514 23:53:58.628196 2272 server.go:455] "Adding debug handlers to kubelet server" May 14 23:53:58.631749 kubelet[2272]: E0514 23:53:58.631569 2272 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.62:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.62:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f89eeac268c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 23:53:58.62647092 +0000 UTC m=+0.509296909,LastTimestamp:2025-05-14 23:53:58.62647092 +0000 UTC m=+0.509296909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 23:53:58.631909 kubelet[2272]: E0514 23:53:58.631891 2272 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:53:58.632042 kubelet[2272]: I0514 23:53:58.632031 2272 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 23:53:58.632612 kubelet[2272]: I0514 23:53:58.632592 2272 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:53:58.632858 kubelet[2272]: I0514 23:53:58.632844 2272 reconciler.go:26] "Reconciler: start to sync state" May 14 23:53:58.633165 kubelet[2272]: E0514 23:53:58.632599 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="200ms" May 14 23:53:58.633325 kubelet[2272]: W0514 23:53:58.633291 2272 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.633407 kubelet[2272]: E0514 23:53:58.633394 2272 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.634721 kubelet[2272]: I0514 23:53:58.634702 2272 factory.go:221] Registration of the containerd container factory successfully May 14 23:53:58.634821 kubelet[2272]: I0514 23:53:58.634810 2272 factory.go:221] Registration of the systemd container factory successfully May 14 23:53:58.634929 kubelet[2272]: I0514 23:53:58.634913 
2272 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:53:58.639433 kubelet[2272]: E0514 23:53:58.639403 2272 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:53:58.643056 kubelet[2272]: I0514 23:53:58.643010 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:53:58.643999 kubelet[2272]: I0514 23:53:58.643969 2272 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:53:58.644063 kubelet[2272]: I0514 23:53:58.644007 2272 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:53:58.644063 kubelet[2272]: I0514 23:53:58.644029 2272 kubelet.go:2337] "Starting kubelet main sync loop" May 14 23:53:58.644109 kubelet[2272]: E0514 23:53:58.644081 2272 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:53:58.646653 kubelet[2272]: W0514 23:53:58.646613 2272 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.646736 kubelet[2272]: E0514 23:53:58.646663 2272 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:58.647904 kubelet[2272]: I0514 23:53:58.647881 2272 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:53:58.647904 kubelet[2272]: I0514 23:53:58.647901 2272 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:53:58.648001 kubelet[2272]: I0514 23:53:58.647918 2272 state_mem.go:36] "Initialized new in-memory state store" May 14 23:53:58.711840 kubelet[2272]: I0514 23:53:58.711792 2272 policy_none.go:49] "None policy: Start" May 14 23:53:58.712742 kubelet[2272]: I0514 23:53:58.712722 2272 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:53:58.712786 kubelet[2272]: I0514 23:53:58.712751 2272 state_mem.go:35] "Initializing new in-memory state store" May 14 23:53:58.717855 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:53:58.733247 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 23:53:58.734086 kubelet[2272]: I0514 23:53:58.734018 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:53:58.734365 kubelet[2272]: E0514 23:53:58.734325 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 14 23:53:58.744672 kubelet[2272]: E0514 23:53:58.744634 2272 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:53:58.747942 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 14 23:53:58.749429 kubelet[2272]: I0514 23:53:58.749399 2272 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:53:58.749711 kubelet[2272]: I0514 23:53:58.749641 2272 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:53:58.749759 kubelet[2272]: I0514 23:53:58.749746 2272 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:53:58.753102 kubelet[2272]: E0514 23:53:58.753065 2272 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 23:53:58.833723 kubelet[2272]: E0514 23:53:58.833613 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="400ms" May 14 23:53:58.935983 kubelet[2272]: I0514 23:53:58.935947 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:53:58.936400 kubelet[2272]: E0514 23:53:58.936373 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 14 23:53:58.945654 kubelet[2272]: I0514 23:53:58.945543 2272 topology_manager.go:215] "Topology Admit Handler" podUID="8dedc9c23fa9318bd27f40b87a238e9f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 23:53:58.946650 kubelet[2272]: I0514 23:53:58.946595 2272 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 23:53:58.947356 kubelet[2272]: I0514 23:53:58.947328 2272 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 23:53:58.954689 systemd[1]: Created slice kubepods-burstable-pod8dedc9c23fa9318bd27f40b87a238e9f.slice - libcontainer container kubepods-burstable-pod8dedc9c23fa9318bd27f40b87a238e9f.slice. May 14 23:53:58.971320 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 14 23:53:58.991485 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
May 14 23:53:59.034457 kubelet[2272]: I0514 23:53:59.034359 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 23:53:59.034457 kubelet[2272]: I0514 23:53:59.034398 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dedc9c23fa9318bd27f40b87a238e9f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8dedc9c23fa9318bd27f40b87a238e9f\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:59.034457 kubelet[2272]: I0514 23:53:59.034419 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:59.034457 kubelet[2272]: I0514 23:53:59.034442 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dedc9c23fa9318bd27f40b87a238e9f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8dedc9c23fa9318bd27f40b87a238e9f\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:59.034457 kubelet[2272]: I0514 23:53:59.034459 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dedc9c23fa9318bd27f40b87a238e9f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8dedc9c23fa9318bd27f40b87a238e9f\") " pod="kube-system/kube-apiserver-localhost" May 14 23:53:59.034691 kubelet[2272]: I0514 23:53:59.034479 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:59.034691 kubelet[2272]: I0514 23:53:59.034497 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:59.034691 kubelet[2272]: I0514 23:53:59.034515 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:53:59.034691 kubelet[2272]: I0514 23:53:59.034546 2272 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 14 23:53:59.234458 kubelet[2272]: E0514 23:53:59.234410 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="800ms" May 14 23:53:59.270070 containerd[1458]: time="2025-05-14T23:53:59.270030937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8dedc9c23fa9318bd27f40b87a238e9f,Namespace:kube-system,Attempt:0,}" May 14 23:53:59.290391 containerd[1458]: time="2025-05-14T23:53:59.290308307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 14 23:53:59.294508 containerd[1458]: time="2025-05-14T23:53:59.294265082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 14 23:53:59.338453 kubelet[2272]: I0514 23:53:59.338420 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:53:59.338841 kubelet[2272]: E0514 23:53:59.338815 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 14 23:53:59.621043 kubelet[2272]: W0514 23:53:59.620461 2272 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:59.621043 kubelet[2272]: E0514 23:53:59.620561 2272 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:59.845191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676545040.mount: Deactivated successfully. 
May 14 23:53:59.855668 containerd[1458]: time="2025-05-14T23:53:59.855604455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:59.858731 containerd[1458]: time="2025-05-14T23:53:59.858685158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:53:59.859797 containerd[1458]: time="2025-05-14T23:53:59.859760154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:59.861195 containerd[1458]: time="2025-05-14T23:53:59.861142789Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:59.864583 containerd[1458]: time="2025-05-14T23:53:59.863374039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:53:59.864583 containerd[1458]: time="2025-05-14T23:53:59.863423040Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:59.864583 containerd[1458]: time="2025-05-14T23:53:59.864129326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 14 23:53:59.866321 containerd[1458]: time="2025-05-14T23:53:59.866271086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:53:59.868656 containerd[1458]: time="2025-05-14T23:53:59.868626118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.229401ms" May 14 23:53:59.871605 containerd[1458]: time="2025-05-14T23:53:59.870923435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 600.804888ms" May 14 23:53:59.880331 containerd[1458]: time="2025-05-14T23:53:59.880268903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.904378ms" May 14 23:53:59.897371 kubelet[2272]: W0514 23:53:59.897313 2272 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:53:59.897875 kubelet[2272]: E0514 
23:53:59.897785 2272 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:54:00.034817 containerd[1458]: time="2025-05-14T23:54:00.034666158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:00.035033 containerd[1458]: time="2025-05-14T23:54:00.034992334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:00.035033 containerd[1458]: time="2025-05-14T23:54:00.035013800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:00.035165 containerd[1458]: time="2025-05-14T23:54:00.035102139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:00.035475 containerd[1458]: time="2025-05-14T23:54:00.035354446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:00.035530 kubelet[2272]: E0514 23:54:00.035446 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.62:6443: connect: connection refused" interval="1.6s" May 14 23:54:00.035609 containerd[1458]: time="2025-05-14T23:54:00.035505822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:00.035609 containerd[1458]: time="2025-05-14T23:54:00.035519772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:00.035850 containerd[1458]: time="2025-05-14T23:54:00.035718436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:00.039938 containerd[1458]: time="2025-05-14T23:54:00.039683434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:00.039938 containerd[1458]: time="2025-05-14T23:54:00.039731441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:00.039938 containerd[1458]: time="2025-05-14T23:54:00.039742593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:00.039938 containerd[1458]: time="2025-05-14T23:54:00.039805071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:00.053735 systemd[1]: Started cri-containerd-ea35bcaf892d1c8ad0d224ad4180f59d9013b5329ce5b5f0897dddc150c51913.scope - libcontainer container ea35bcaf892d1c8ad0d224ad4180f59d9013b5329ce5b5f0897dddc150c51913. May 14 23:54:00.057633 systemd[1]: Started cri-containerd-c6434e25325e65c49afd4cd3d91d1495ee2a8a063c4c86dd691f648a52be30c0.scope - libcontainer container c6434e25325e65c49afd4cd3d91d1495ee2a8a063c4c86dd691f648a52be30c0. 
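The reflector warnings above come from the kubelet's informers issuing paged LIST calls (limit=500) against an apiserver that is not up yet. A rough client-go sketch of the same call shape; the kubeconfig path is an assumption for the sketch (a kubeadm-style location, not taken from this log):

```go
// listsvc.go: a list call of the same shape the reflector retries above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// While the apiserver is down this fails with "connection refused",
	// matching the W/E reflector lines in the journal.
	svcs, err := cs.CoreV1().Services("").List(context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("services listed:", len(svcs.Items))
}
```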
May 14 23:54:00.058997 systemd[1]: Started cri-containerd-efd1531cb90bfa42e8fd967ced441791ad529244103f561e195d371eae8bf3e5.scope - libcontainer container efd1531cb90bfa42e8fd967ced441791ad529244103f561e195d371eae8bf3e5. May 14 23:54:00.098942 containerd[1458]: time="2025-05-14T23:54:00.098731860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8dedc9c23fa9318bd27f40b87a238e9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"efd1531cb90bfa42e8fd967ced441791ad529244103f561e195d371eae8bf3e5\"" May 14 23:54:00.104623 containerd[1458]: time="2025-05-14T23:54:00.104584722Z" level=info msg="CreateContainer within sandbox \"efd1531cb90bfa42e8fd967ced441791ad529244103f561e195d371eae8bf3e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:54:00.106159 containerd[1458]: time="2025-05-14T23:54:00.106130181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6434e25325e65c49afd4cd3d91d1495ee2a8a063c4c86dd691f648a52be30c0\"" May 14 23:54:00.109452 containerd[1458]: time="2025-05-14T23:54:00.109374154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea35bcaf892d1c8ad0d224ad4180f59d9013b5329ce5b5f0897dddc150c51913\"" May 14 23:54:00.111221 containerd[1458]: time="2025-05-14T23:54:00.111195464Z" level=info msg="CreateContainer within sandbox \"c6434e25325e65c49afd4cd3d91d1495ee2a8a063c4c86dd691f648a52be30c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:54:00.112809 containerd[1458]: time="2025-05-14T23:54:00.112701710Z" level=info msg="CreateContainer within sandbox \"ea35bcaf892d1c8ad0d224ad4180f59d9013b5329ce5b5f0897dddc150c51913\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:54:00.114706 kubelet[2272]: W0514 23:54:00.114667 2272 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:54:00.114706 kubelet[2272]: E0514 23:54:00.114709 2272 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:54:00.127421 containerd[1458]: time="2025-05-14T23:54:00.127333146Z" level=info msg="CreateContainer within sandbox \"c6434e25325e65c49afd4cd3d91d1495ee2a8a063c4c86dd691f648a52be30c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a2296b91e77f7d95739543360d2c6ef941597f4f4db256db8b2379baad7d630d\"" May 14 23:54:00.128146 containerd[1458]: time="2025-05-14T23:54:00.128106935Z" level=info msg="StartContainer for \"a2296b91e77f7d95739543360d2c6ef941597f4f4db256db8b2379baad7d630d\"" May 14 23:54:00.129995 containerd[1458]: time="2025-05-14T23:54:00.129712713Z" level=info msg="CreateContainer within sandbox \"efd1531cb90bfa42e8fd967ced441791ad529244103f561e195d371eae8bf3e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"93525e7964d3b17287679016aad8a05465d9a752c4fb769a9764e81e971afc90\"" May 14 23:54:00.130086 containerd[1458]: 
time="2025-05-14T23:54:00.130058635Z" level=info msg="StartContainer for \"93525e7964d3b17287679016aad8a05465d9a752c4fb769a9764e81e971afc90\"" May 14 23:54:00.139874 containerd[1458]: time="2025-05-14T23:54:00.139828608Z" level=info msg="CreateContainer within sandbox \"ea35bcaf892d1c8ad0d224ad4180f59d9013b5329ce5b5f0897dddc150c51913\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d6489b759e2b5272d647b00ba835f7c61941ff30bfaef4a95b147719857a60a5\"" May 14 23:54:00.140948 containerd[1458]: time="2025-05-14T23:54:00.140915862Z" level=info msg="StartContainer for \"d6489b759e2b5272d647b00ba835f7c61941ff30bfaef4a95b147719857a60a5\"" May 14 23:54:00.141257 kubelet[2272]: I0514 23:54:00.141230 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:54:00.141652 kubelet[2272]: E0514 23:54:00.141612 2272 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.62:6443/api/v1/nodes\": dial tcp 10.0.0.62:6443: connect: connection refused" node="localhost" May 14 23:54:00.158718 systemd[1]: Started cri-containerd-a2296b91e77f7d95739543360d2c6ef941597f4f4db256db8b2379baad7d630d.scope - libcontainer container a2296b91e77f7d95739543360d2c6ef941597f4f4db256db8b2379baad7d630d. May 14 23:54:00.162014 systemd[1]: Started cri-containerd-93525e7964d3b17287679016aad8a05465d9a752c4fb769a9764e81e971afc90.scope - libcontainer container 93525e7964d3b17287679016aad8a05465d9a752c4fb769a9764e81e971afc90. May 14 23:54:00.169642 systemd[1]: Started cri-containerd-d6489b759e2b5272d647b00ba835f7c61941ff30bfaef4a95b147719857a60a5.scope - libcontainer container d6489b759e2b5272d647b00ba835f7c61941ff30bfaef4a95b147719857a60a5. May 14 23:54:00.185580 kubelet[2272]: W0514 23:54:00.185446 2272 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:54:00.185580 kubelet[2272]: E0514 23:54:00.185574 2272 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.62:6443: connect: connection refused May 14 23:54:00.249791 containerd[1458]: time="2025-05-14T23:54:00.249380645Z" level=info msg="StartContainer for \"d6489b759e2b5272d647b00ba835f7c61941ff30bfaef4a95b147719857a60a5\" returns successfully" May 14 23:54:00.249791 containerd[1458]: time="2025-05-14T23:54:00.249581628Z" level=info msg="StartContainer for \"a2296b91e77f7d95739543360d2c6ef941597f4f4db256db8b2379baad7d630d\" returns successfully" May 14 23:54:00.249791 containerd[1458]: time="2025-05-14T23:54:00.249614845Z" level=info msg="StartContainer for \"93525e7964d3b17287679016aad8a05465d9a752c4fb769a9764e81e971afc90\" returns successfully" May 14 23:54:01.743714 kubelet[2272]: I0514 23:54:01.743666 2272 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:54:01.980203 kubelet[2272]: E0514 23:54:01.980111 2272 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 23:54:02.067018 kubelet[2272]: I0514 23:54:02.066899 2272 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 23:54:02.083514 kubelet[2272]: E0514 23:54:02.083470 2272 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:54:02.184224 kubelet[2272]: E0514 23:54:02.184176 2272 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:54:02.284855 kubelet[2272]: E0514 23:54:02.284792 2272 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:54:02.385323 kubelet[2272]: E0514 23:54:02.385275 2272 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:54:02.486228 kubelet[2272]: E0514 23:54:02.486189 2272 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:54:02.586716 kubelet[2272]: E0514 23:54:02.586684 2272 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:54:02.687089 kubelet[2272]: E0514 23:54:02.686996 2272 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:54:03.625743 kubelet[2272]: I0514 23:54:03.625701 2272 apiserver.go:52] "Watching apiserver" May 14 23:54:03.633209 kubelet[2272]: I0514 23:54:03.633177 2272 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:54:04.069119 systemd[1]: Reload requested from client PID 2550 ('systemctl') (unit session-7.scope)... May 14 23:54:04.069133 systemd[1]: Reloading... May 14 23:54:04.141582 zram_generator::config[2594]: No configuration found. May 14 23:54:04.227983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:54:04.313949 systemd[1]: Reloading finished in 244 ms. May 14 23:54:04.334990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:54:04.341822 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:54:04.342044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:54:04.342102 systemd[1]: kubelet.service: Consumed 871ms CPU time, 115.1M memory peak. May 14 23:54:04.350812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:54:04.445651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:54:04.449632 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:54:04.490377 kubelet[2636]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:54:04.490377 kubelet[2636]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:54:04.490377 kubelet[2636]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:54:04.490719 kubelet[2636]: I0514 23:54:04.490436 2636 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:54:04.494895 kubelet[2636]: I0514 23:54:04.494871 2636 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 23:54:04.494895 kubelet[2636]: I0514 23:54:04.494893 2636 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:54:04.495081 kubelet[2636]: I0514 23:54:04.495065 2636 server.go:927] "Client rotation is on, will bootstrap in background" May 14 23:54:04.496358 kubelet[2636]: I0514 23:54:04.496335 2636 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 23:54:04.497436 kubelet[2636]: I0514 23:54:04.497414 2636 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:54:04.502553 kubelet[2636]: I0514 23:54:04.502321 2636 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 23:54:04.502553 kubelet[2636]: I0514 23:54:04.502494 2636 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:54:04.502847 kubelet[2636]: I0514 23:54:04.502516 2636 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 23:54:04.502937 kubelet[2636]: I0514 23:54:04.502860 2636 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:54:04.502937 kubelet[2636]: I0514 23:54:04.502870 2636 container_manager_linux.go:301] "Creating device plugin manager" May 14 23:54:04.502937 kubelet[2636]: I0514 23:54:04.502905 2636 state_mem.go:36] "Initialized new in-memory state store" May 14 23:54:04.503011 kubelet[2636]: I0514 23:54:04.502997 2636 kubelet.go:400] "Attempting to sync node with API server" May 14 23:54:04.503011 kubelet[2636]: I0514 23:54:04.503009 2636 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" May 14 23:54:04.503063 kubelet[2636]: I0514 23:54:04.503042 2636 kubelet.go:312] "Adding apiserver pod source" May 14 23:54:04.503063 kubelet[2636]: I0514 23:54:04.503060 2636 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:54:04.503806 kubelet[2636]: I0514 23:54:04.503775 2636 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:54:04.506586 kubelet[2636]: I0514 23:54:04.506555 2636 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:54:04.507587 kubelet[2636]: I0514 23:54:04.507561 2636 server.go:1264] "Started kubelet" May 14 23:54:04.508606 kubelet[2636]: I0514 23:54:04.507733 2636 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:54:04.508606 kubelet[2636]: I0514 23:54:04.507852 2636 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:54:04.508606 kubelet[2636]: I0514 23:54:04.508089 2636 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:54:04.509757 kubelet[2636]: I0514 23:54:04.509733 2636 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:54:04.510156 kubelet[2636]: I0514 23:54:04.510134 2636 server.go:455] "Adding debug handlers to kubelet server" May 14 23:54:04.511881 kubelet[2636]: I0514 23:54:04.511854 2636 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 23:54:04.512343 kubelet[2636]: I0514 23:54:04.511984 2636 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:54:04.512343 kubelet[2636]: I0514 23:54:04.512151 2636 reconciler.go:26] "Reconciler: start to sync state" May 14 23:54:04.526139 kubelet[2636]: I0514 23:54:04.526105 2636 factory.go:221] Registration of the containerd container factory successfully May 14 23:54:04.526139 kubelet[2636]: I0514 23:54:04.526130 2636 factory.go:221] Registration of the systemd container factory successfully May 14 23:54:04.526246 kubelet[2636]: I0514 23:54:04.526204 2636 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:54:04.529441 kubelet[2636]: I0514 23:54:04.529020 2636 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:54:04.530051 kubelet[2636]: I0514 23:54:04.529966 2636 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 23:54:04.530178 kubelet[2636]: I0514 23:54:04.530165 2636 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:54:04.530236 kubelet[2636]: I0514 23:54:04.530227 2636 kubelet.go:2337] "Starting kubelet main sync loop" May 14 23:54:04.530358 kubelet[2636]: E0514 23:54:04.530337 2636 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:54:04.579256 kubelet[2636]: I0514 23:54:04.578905 2636 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:54:04.579256 kubelet[2636]: I0514 23:54:04.578928 2636 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:54:04.579256 kubelet[2636]: I0514 23:54:04.578949 2636 state_mem.go:36] "Initialized new in-memory state store" May 14 23:54:04.579256 kubelet[2636]: I0514 23:54:04.579100 2636 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:54:04.579256 kubelet[2636]: I0514 23:54:04.579111 2636 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:54:04.579256 kubelet[2636]: I0514 23:54:04.579127 2636 policy_none.go:49] "None policy: Start" May 14 23:54:04.580005 kubelet[2636]: I0514 23:54:04.579635 2636 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:54:04.580005 kubelet[2636]: I0514 23:54:04.579662 2636 state_mem.go:35] "Initializing new in-memory state store" May 14 23:54:04.580005 kubelet[2636]: I0514 23:54:04.579819 2636 state_mem.go:75] "Updated machine memory state" May 14 23:54:04.583987 kubelet[2636]: I0514 23:54:04.583963 2636 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:54:04.584171 kubelet[2636]: I0514 23:54:04.584127 2636 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:54:04.584339 kubelet[2636]: I0514 23:54:04.584233 2636 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:54:04.615670 kubelet[2636]: I0514 23:54:04.615638 2636 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 14 23:54:04.621224 kubelet[2636]: I0514 23:54:04.621198 2636 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 14 23:54:04.621295 kubelet[2636]: I0514 23:54:04.621269 2636 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 14 23:54:04.630983 kubelet[2636]: I0514 23:54:04.630937 2636 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 14 23:54:04.631092 kubelet[2636]: I0514 23:54:04.631070 2636 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 14 23:54:04.631819 kubelet[2636]: I0514 23:54:04.631786 2636 topology_manager.go:215] "Topology Admit Handler" podUID="8dedc9c23fa9318bd27f40b87a238e9f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 14 23:54:04.638134 kubelet[2636]: E0514 23:54:04.638063 2636 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 23:54:04.713260 kubelet[2636]: I0514 23:54:04.713223 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:54:04.713260 kubelet[2636]: I0514 23:54:04.713260 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:54:04.713372 kubelet[2636]: I0514 23:54:04.713283 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dedc9c23fa9318bd27f40b87a238e9f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8dedc9c23fa9318bd27f40b87a238e9f\") " pod="kube-system/kube-apiserver-localhost" May 14 23:54:04.713372 kubelet[2636]: I0514 23:54:04.713305 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:54:04.713372 kubelet[2636]: I0514 23:54:04.713320 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:54:04.713372 kubelet[2636]: I0514 23:54:04.713336 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:54:04.713372 kubelet[2636]: I0514 23:54:04.713352 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 14 23:54:04.713482 kubelet[2636]: I0514 23:54:04.713365 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dedc9c23fa9318bd27f40b87a238e9f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8dedc9c23fa9318bd27f40b87a238e9f\") " pod="kube-system/kube-apiserver-localhost" May 14 23:54:04.713482 kubelet[2636]: I0514 23:54:04.713388 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dedc9c23fa9318bd27f40b87a238e9f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8dedc9c23fa9318bd27f40b87a238e9f\") " pod="kube-system/kube-apiserver-localhost" May 14 23:54:05.503589 kubelet[2636]: I0514 23:54:05.503546 2636 apiserver.go:52] "Watching apiserver" May 14 23:54:05.512635 kubelet[2636]: I0514 
23:54:05.512602 2636 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:54:05.560331 kubelet[2636]: E0514 23:54:05.560267 2636 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 14 23:54:05.561042 kubelet[2636]: E0514 23:54:05.560302 2636 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 23:54:05.589744 kubelet[2636]: I0514 23:54:05.589681 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.589662502 podStartE2EDuration="1.589662502s" podCreationTimestamp="2025-05-14 23:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:05.580261671 +0000 UTC m=+1.127570596" watchObservedRunningTime="2025-05-14 23:54:05.589662502 +0000 UTC m=+1.136971466" May 14 23:54:05.599253 kubelet[2636]: I0514 23:54:05.599196 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.59917852 podStartE2EDuration="2.59917852s" podCreationTimestamp="2025-05-14 23:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:05.590011904 +0000 UTC m=+1.137320869" watchObservedRunningTime="2025-05-14 23:54:05.59917852 +0000 UTC m=+1.146487485" May 14 23:54:05.614186 kubelet[2636]: I0514 23:54:05.613834 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6138167879999998 podStartE2EDuration="1.613816788s" podCreationTimestamp="2025-05-14 23:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:05.601104273 +0000 UTC m=+1.148413238" watchObservedRunningTime="2025-05-14 23:54:05.613816788 +0000 UTC m=+1.161125753" May 14 23:54:09.304161 sudo[1644]: pam_unix(sudo:session): session closed for user root May 14 23:54:09.313152 sshd[1643]: Connection closed by 10.0.0.1 port 44478 May 14 23:54:09.313658 sshd-session[1640]: pam_unix(sshd:session): session closed for user core May 14 23:54:09.318052 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. May 14 23:54:09.318204 systemd[1]: sshd@6-10.0.0.62:22-10.0.0.1:44478.service: Deactivated successfully. May 14 23:54:09.319999 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:54:09.320177 systemd[1]: session-7.scope: Consumed 7.941s CPU time, 249.5M memory peak. May 14 23:54:09.321400 systemd-logind[1440]: Removed session 7. May 14 23:54:17.938881 kubelet[2636]: I0514 23:54:17.938836 2636 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:54:17.950666 containerd[1458]: time="2025-05-14T23:54:17.950580506Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
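Later in the log the kubelet pushes its assigned pod CIDR (192.168.0.0/24) down to the runtime. A small stdlib sketch of what that prefix bounds:

```go
// podcidr.go: inspect the pod CIDR reported in the kubelet_network entries.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for pods on this node,
	// minus whatever the CNI plugin reserves.
	fmt.Printf("pod CIDR %v: %d addresses in the block\n", cidr, 1<<(bits-ones))
	fmt.Println("192.168.0.42 in range:", cidr.Contains(net.ParseIP("192.168.0.42")))
}
```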
May 14 23:54:17.951308 kubelet[2636]: I0514 23:54:17.951266 2636 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:54:18.089771 update_engine[1445]: I20250514 23:54:18.089687 1445 update_attempter.cc:509] Updating boot flags... May 14 23:54:18.115559 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2732) May 14 23:54:18.154587 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2731) May 14 23:54:18.187571 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2731) May 14 23:54:18.870803 kubelet[2636]: I0514 23:54:18.870754 2636 topology_manager.go:215] "Topology Admit Handler" podUID="2006f1ce-1a92-4296-931f-47222c578a0b" podNamespace="kube-system" podName="kube-proxy-sjjnp" May 14 23:54:18.885009 systemd[1]: Created slice kubepods-besteffort-pod2006f1ce_1a92_4296_931f_47222c578a0b.slice - libcontainer container kubepods-besteffort-pod2006f1ce_1a92_4296_931f_47222c578a0b.slice. May 14 23:54:18.908809 kubelet[2636]: I0514 23:54:18.908653 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2006f1ce-1a92-4296-931f-47222c578a0b-kube-proxy\") pod \"kube-proxy-sjjnp\" (UID: \"2006f1ce-1a92-4296-931f-47222c578a0b\") " pod="kube-system/kube-proxy-sjjnp" May 14 23:54:18.908809 kubelet[2636]: I0514 23:54:18.908694 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2006f1ce-1a92-4296-931f-47222c578a0b-xtables-lock\") pod \"kube-proxy-sjjnp\" (UID: \"2006f1ce-1a92-4296-931f-47222c578a0b\") " pod="kube-system/kube-proxy-sjjnp" May 14 23:54:18.908809 kubelet[2636]: I0514 23:54:18.908723 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2006f1ce-1a92-4296-931f-47222c578a0b-lib-modules\") pod \"kube-proxy-sjjnp\" (UID: \"2006f1ce-1a92-4296-931f-47222c578a0b\") " pod="kube-system/kube-proxy-sjjnp" May 14 23:54:18.908809 kubelet[2636]: I0514 23:54:18.908741 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gd86\" (UniqueName: \"kubernetes.io/projected/2006f1ce-1a92-4296-931f-47222c578a0b-kube-api-access-2gd86\") pod \"kube-proxy-sjjnp\" (UID: \"2006f1ce-1a92-4296-931f-47222c578a0b\") " pod="kube-system/kube-proxy-sjjnp" May 14 23:54:19.036001 kubelet[2636]: I0514 23:54:19.035951 2636 topology_manager.go:215] "Topology Admit Handler" podUID="93274eb5-2374-45af-876e-7ffa2c4a7123" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-4t7fw" May 14 23:54:19.046200 systemd[1]: Created slice kubepods-besteffort-pod93274eb5_2374_45af_876e_7ffa2c4a7123.slice - libcontainer container kubepods-besteffort-pod93274eb5_2374_45af_876e_7ffa2c4a7123.slice. 
May 14 23:54:19.110304 kubelet[2636]: I0514 23:54:19.110261 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93274eb5-2374-45af-876e-7ffa2c4a7123-var-lib-calico\") pod \"tigera-operator-797db67f8-4t7fw\" (UID: \"93274eb5-2374-45af-876e-7ffa2c4a7123\") " pod="tigera-operator/tigera-operator-797db67f8-4t7fw" May 14 23:54:19.110304 kubelet[2636]: I0514 23:54:19.110305 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjzf\" (UniqueName: \"kubernetes.io/projected/93274eb5-2374-45af-876e-7ffa2c4a7123-kube-api-access-spjzf\") pod \"tigera-operator-797db67f8-4t7fw\" (UID: \"93274eb5-2374-45af-876e-7ffa2c4a7123\") " pod="tigera-operator/tigera-operator-797db67f8-4t7fw" May 14 23:54:19.199860 containerd[1458]: time="2025-05-14T23:54:19.199672613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sjjnp,Uid:2006f1ce-1a92-4296-931f-47222c578a0b,Namespace:kube-system,Attempt:0,}" May 14 23:54:19.224563 containerd[1458]: time="2025-05-14T23:54:19.224463958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:19.224563 containerd[1458]: time="2025-05-14T23:54:19.224524635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:19.225268 containerd[1458]: time="2025-05-14T23:54:19.225202241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:19.225513 containerd[1458]: time="2025-05-14T23:54:19.225472867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:19.249801 systemd[1]: Started cri-containerd-8a1c96623317c3c0fc9c38358430ce90fac77af982dcb0cbfd9625f233d5a1da.scope - libcontainer container 8a1c96623317c3c0fc9c38358430ce90fac77af982dcb0cbfd9625f233d5a1da. May 14 23:54:19.270903 containerd[1458]: time="2025-05-14T23:54:19.270864570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sjjnp,Uid:2006f1ce-1a92-4296-931f-47222c578a0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a1c96623317c3c0fc9c38358430ce90fac77af982dcb0cbfd9625f233d5a1da\"" May 14 23:54:19.277286 containerd[1458]: time="2025-05-14T23:54:19.277239288Z" level=info msg="CreateContainer within sandbox \"8a1c96623317c3c0fc9c38358430ce90fac77af982dcb0cbfd9625f233d5a1da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:54:19.292009 containerd[1458]: time="2025-05-14T23:54:19.291956103Z" level=info msg="CreateContainer within sandbox \"8a1c96623317c3c0fc9c38358430ce90fac77af982dcb0cbfd9625f233d5a1da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5bb386144e7a919bdd2aeaf9bf0ef86e6ba4916cd584c44caf496c162e9673f2\"" May 14 23:54:19.294389 containerd[1458]: time="2025-05-14T23:54:19.294355462Z" level=info msg="StartContainer for \"5bb386144e7a919bdd2aeaf9bf0ef86e6ba4916cd584c44caf496c162e9673f2\"" May 14 23:54:19.343763 systemd[1]: Started cri-containerd-5bb386144e7a919bdd2aeaf9bf0ef86e6ba4916cd584c44caf496c162e9673f2.scope - libcontainer container 5bb386144e7a919bdd2aeaf9bf0ef86e6ba4916cd584c44caf496c162e9673f2. 
May 14 23:54:19.349564 containerd[1458]: time="2025-05-14T23:54:19.349300722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-4t7fw,Uid:93274eb5-2374-45af-876e-7ffa2c4a7123,Namespace:tigera-operator,Attempt:0,}" May 14 23:54:19.415358 containerd[1458]: time="2025-05-14T23:54:19.415290743Z" level=info msg="StartContainer for \"5bb386144e7a919bdd2aeaf9bf0ef86e6ba4916cd584c44caf496c162e9673f2\" returns successfully" May 14 23:54:19.437265 containerd[1458]: time="2025-05-14T23:54:19.437155556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:19.437265 containerd[1458]: time="2025-05-14T23:54:19.437224033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:19.437778 containerd[1458]: time="2025-05-14T23:54:19.437238672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:19.437852 containerd[1458]: time="2025-05-14T23:54:19.437516298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:19.457750 systemd[1]: Started cri-containerd-b67d2b9224a27ba8fad9aeb59f9ddfafaf00ce9d5493c4f9e5ff5cb794735875.scope - libcontainer container b67d2b9224a27ba8fad9aeb59f9ddfafaf00ce9d5493c4f9e5ff5cb794735875. May 14 23:54:19.489409 containerd[1458]: time="2025-05-14T23:54:19.489334436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-4t7fw,Uid:93274eb5-2374-45af-876e-7ffa2c4a7123,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b67d2b9224a27ba8fad9aeb59f9ddfafaf00ce9d5493c4f9e5ff5cb794735875\"" May 14 23:54:19.492097 containerd[1458]: time="2025-05-14T23:54:19.491271698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 23:54:19.893064 kubelet[2636]: I0514 23:54:19.892932 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sjjnp" podStartSLOduration=1.892915335 podStartE2EDuration="1.892915335s" podCreationTimestamp="2025-05-14 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:19.5880718 +0000 UTC m=+15.135380765" watchObservedRunningTime="2025-05-14 23:54:19.892915335 +0000 UTC m=+15.440224260" May 14 23:54:20.963590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount384372464.mount: Deactivated successfully. 
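The PullImage request above is handled by containerd's CRI plugin. Roughly the same pull can be reproduced with the containerd Go client; the socket path is the usual containerd default and the pull options are an illustrative choice, not what the CRI plugin does internally:

```go
// pull.go: pull the operator image into the CRI ("k8s.io") namespace.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.7", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```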
May 14 23:54:21.589340 containerd[1458]: time="2025-05-14T23:54:21.589296587Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:21.589848 containerd[1458]: time="2025-05-14T23:54:21.589715328Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 14 23:54:21.590771 containerd[1458]: time="2025-05-14T23:54:21.590741961Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:21.592645 containerd[1458]: time="2025-05-14T23:54:21.592607555Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:21.594414 containerd[1458]: time="2025-05-14T23:54:21.594379514Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.103058178s" May 14 23:54:21.594457 containerd[1458]: time="2025-05-14T23:54:21.594414392Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 14 23:54:21.598002 containerd[1458]: time="2025-05-14T23:54:21.597884153Z" level=info msg="CreateContainer within sandbox \"b67d2b9224a27ba8fad9aeb59f9ddfafaf00ce9d5493c4f9e5ff5cb794735875\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 23:54:21.607719 containerd[1458]: time="2025-05-14T23:54:21.607680343Z" level=info msg="CreateContainer within sandbox \"b67d2b9224a27ba8fad9aeb59f9ddfafaf00ce9d5493c4f9e5ff5cb794735875\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3b2dba16394dde54254123009c0a9fa4ae2dace73af137216f07a06b3a5b8802\"" May 14 23:54:21.609692 containerd[1458]: time="2025-05-14T23:54:21.609653613Z" level=info msg="StartContainer for \"3b2dba16394dde54254123009c0a9fa4ae2dace73af137216f07a06b3a5b8802\"" May 14 23:54:21.639747 systemd[1]: Started cri-containerd-3b2dba16394dde54254123009c0a9fa4ae2dace73af137216f07a06b3a5b8802.scope - libcontainer container 3b2dba16394dde54254123009c0a9fa4ae2dace73af137216f07a06b3a5b8802. 
May 14 23:54:21.662361 containerd[1458]: time="2025-05-14T23:54:21.662320794Z" level=info msg="StartContainer for \"3b2dba16394dde54254123009c0a9fa4ae2dace73af137216f07a06b3a5b8802\" returns successfully" May 14 23:54:22.604431 kubelet[2636]: I0514 23:54:22.603859 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-4t7fw" podStartSLOduration=1.497859002 podStartE2EDuration="3.603837363s" podCreationTimestamp="2025-05-14 23:54:19 +0000 UTC" firstStartedPulling="2025-05-14 23:54:19.490659129 +0000 UTC m=+15.037968094" lastFinishedPulling="2025-05-14 23:54:21.59663749 +0000 UTC m=+17.143946455" observedRunningTime="2025-05-14 23:54:22.600471791 +0000 UTC m=+18.147780756" watchObservedRunningTime="2025-05-14 23:54:22.603837363 +0000 UTC m=+18.151146368" May 14 23:54:22.605819 systemd[1]: run-containerd-runc-k8s.io-3b2dba16394dde54254123009c0a9fa4ae2dace73af137216f07a06b3a5b8802-runc.dLJqAP.mount: Deactivated successfully. May 14 23:54:25.218565 kubelet[2636]: I0514 23:54:25.217259 2636 topology_manager.go:215] "Topology Admit Handler" podUID="98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1" podNamespace="calico-system" podName="calico-typha-cfbfc989d-8ggg9" May 14 23:54:25.234110 systemd[1]: Created slice kubepods-besteffort-pod98b0ca82_214c_4ac9_a6f0_0cf3ceb415c1.slice - libcontainer container kubepods-besteffort-pod98b0ca82_214c_4ac9_a6f0_0cf3ceb415c1.slice. May 14 23:54:25.262008 kubelet[2636]: I0514 23:54:25.261970 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1-tigera-ca-bundle\") pod \"calico-typha-cfbfc989d-8ggg9\" (UID: \"98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1\") " pod="calico-system/calico-typha-cfbfc989d-8ggg9" May 14 23:54:25.262199 kubelet[2636]: I0514 23:54:25.262183 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1-typha-certs\") pod \"calico-typha-cfbfc989d-8ggg9\" (UID: \"98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1\") " pod="calico-system/calico-typha-cfbfc989d-8ggg9" May 14 23:54:25.262280 kubelet[2636]: I0514 23:54:25.262267 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q6v9\" (UniqueName: \"kubernetes.io/projected/98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1-kube-api-access-2q6v9\") pod \"calico-typha-cfbfc989d-8ggg9\" (UID: \"98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1\") " pod="calico-system/calico-typha-cfbfc989d-8ggg9" May 14 23:54:25.313560 kubelet[2636]: I0514 23:54:25.311192 2636 topology_manager.go:215] "Topology Admit Handler" podUID="453716de-4090-4e02-9946-071325d4cabb" podNamespace="calico-system" podName="calico-node-vsql8" May 14 23:54:25.324383 systemd[1]: Created slice kubepods-besteffort-pod453716de_4090_4e02_9946_071325d4cabb.slice - libcontainer container kubepods-besteffort-pod453716de_4090_4e02_9946_071325d4cabb.slice. 
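Before the calico-typha pod admitted below can start, the kubelet must materialise its tigera-ca-bundle ConfigMap and typha-certs Secret volumes from the API. A hedged client-go sketch that only checks both objects exist in calico-system (kubeconfig path assumed, as in the earlier sketch):

```go
// typhavols.go: confirm the API objects backing the typha pod's volumes exist.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	if _, err := cs.CoreV1().ConfigMaps("calico-system").Get(ctx, "tigera-ca-bundle", metav1.GetOptions{}); err != nil {
		log.Fatal("tigera-ca-bundle missing:", err)
	}
	if _, err := cs.CoreV1().Secrets("calico-system").Get(ctx, "typha-certs", metav1.GetOptions{}); err != nil {
		log.Fatal("typha-certs missing:", err)
	}
	fmt.Println("both volume sources are present in calico-system")
}
```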
May 14 23:54:25.420145 kubelet[2636]: I0514 23:54:25.419796 2636 topology_manager.go:215] "Topology Admit Handler" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" podNamespace="calico-system" podName="csi-node-driver-xq692" May 14 23:54:25.425186 kubelet[2636]: E0514 23:54:25.425088 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xq692" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" May 14 23:54:25.463047 kubelet[2636]: I0514 23:54:25.462906 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-xtables-lock\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463047 kubelet[2636]: I0514 23:54:25.462955 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/453716de-4090-4e02-9946-071325d4cabb-node-certs\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463047 kubelet[2636]: I0514 23:54:25.462980 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-cni-bin-dir\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463047 kubelet[2636]: I0514 23:54:25.462999 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-lib-modules\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463047 kubelet[2636]: I0514 23:54:25.463017 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-var-run-calico\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463299 kubelet[2636]: I0514 23:54:25.463050 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-var-lib-calico\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463299 kubelet[2636]: I0514 23:54:25.463066 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-cni-log-dir\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463299 kubelet[2636]: I0514 23:54:25.463087 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/453716de-4090-4e02-9946-071325d4cabb-tigera-ca-bundle\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463299 kubelet[2636]: I0514 23:54:25.463103 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-policysync\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463299 kubelet[2636]: I0514 23:54:25.463120 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-cni-net-dir\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463433 kubelet[2636]: I0514 23:54:25.463147 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/453716de-4090-4e02-9946-071325d4cabb-flexvol-driver-host\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.463433 kubelet[2636]: I0514 23:54:25.463167 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlh4w\" (UniqueName: \"kubernetes.io/projected/453716de-4090-4e02-9946-071325d4cabb-kube-api-access-qlh4w\") pod \"calico-node-vsql8\" (UID: \"453716de-4090-4e02-9946-071325d4cabb\") " pod="calico-system/calico-node-vsql8" May 14 23:54:25.547163 containerd[1458]: time="2025-05-14T23:54:25.547039185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cfbfc989d-8ggg9,Uid:98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1,Namespace:calico-system,Attempt:0,}" May 14 23:54:25.568585 kubelet[2636]: I0514 23:54:25.567706 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78b8ecbd-ebea-41ed-a71c-8ddd96e45e21-kubelet-dir\") pod \"csi-node-driver-xq692\" (UID: \"78b8ecbd-ebea-41ed-a71c-8ddd96e45e21\") " pod="calico-system/csi-node-driver-xq692" May 14 23:54:25.568585 kubelet[2636]: I0514 23:54:25.567795 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8t5\" (UniqueName: \"kubernetes.io/projected/78b8ecbd-ebea-41ed-a71c-8ddd96e45e21-kube-api-access-dd8t5\") pod \"csi-node-driver-xq692\" (UID: \"78b8ecbd-ebea-41ed-a71c-8ddd96e45e21\") " pod="calico-system/csi-node-driver-xq692" May 14 23:54:25.568585 kubelet[2636]: I0514 23:54:25.567837 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/78b8ecbd-ebea-41ed-a71c-8ddd96e45e21-registration-dir\") pod \"csi-node-driver-xq692\" (UID: \"78b8ecbd-ebea-41ed-a71c-8ddd96e45e21\") " pod="calico-system/csi-node-driver-xq692" May 14 23:54:25.568585 kubelet[2636]: I0514 23:54:25.567858 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/78b8ecbd-ebea-41ed-a71c-8ddd96e45e21-varrun\") pod \"csi-node-driver-xq692\" (UID: \"78b8ecbd-ebea-41ed-a71c-8ddd96e45e21\") " 
pod="calico-system/csi-node-driver-xq692" May 14 23:54:25.568585 kubelet[2636]: I0514 23:54:25.567924 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/78b8ecbd-ebea-41ed-a71c-8ddd96e45e21-socket-dir\") pod \"csi-node-driver-xq692\" (UID: \"78b8ecbd-ebea-41ed-a71c-8ddd96e45e21\") " pod="calico-system/csi-node-driver-xq692" May 14 23:54:25.572256 kubelet[2636]: E0514 23:54:25.572143 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.572256 kubelet[2636]: W0514 23:54:25.572167 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.572256 kubelet[2636]: E0514 23:54:25.572185 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.577433 containerd[1458]: time="2025-05-14T23:54:25.577109317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:25.577433 containerd[1458]: time="2025-05-14T23:54:25.577184834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:25.577433 containerd[1458]: time="2025-05-14T23:54:25.577199713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:25.577433 containerd[1458]: time="2025-05-14T23:54:25.577303989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:25.587327 kubelet[2636]: E0514 23:54:25.587203 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.587327 kubelet[2636]: W0514 23:54:25.587225 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.587327 kubelet[2636]: E0514 23:54:25.587273 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.607733 systemd[1]: Started cri-containerd-74c01ebae5aace44009816f446af0721bf9efbb45ba451436c8b182085e908ba.scope - libcontainer container 74c01ebae5aace44009816f446af0721bf9efbb45ba451436c8b182085e908ba. 
May 14 23:54:25.628266 containerd[1458]: time="2025-05-14T23:54:25.628227045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vsql8,Uid:453716de-4090-4e02-9946-071325d4cabb,Namespace:calico-system,Attempt:0,}" May 14 23:54:25.643367 containerd[1458]: time="2025-05-14T23:54:25.643298189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cfbfc989d-8ggg9,Uid:98b0ca82-214c-4ac9-a6f0-0cf3ceb415c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"74c01ebae5aace44009816f446af0721bf9efbb45ba451436c8b182085e908ba\"" May 14 23:54:25.648159 containerd[1458]: time="2025-05-14T23:54:25.646892052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 23:54:25.669123 kubelet[2636]: E0514 23:54:25.669094 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.669123 kubelet[2636]: W0514 23:54:25.669117 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.669354 kubelet[2636]: E0514 23:54:25.669137 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.669391 kubelet[2636]: E0514 23:54:25.669376 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.669421 kubelet[2636]: W0514 23:54:25.669392 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.669421 kubelet[2636]: E0514 23:54:25.669409 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.669648 kubelet[2636]: E0514 23:54:25.669631 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.669648 kubelet[2636]: W0514 23:54:25.669645 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.669774 kubelet[2636]: E0514 23:54:25.669659 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.669868 kubelet[2636]: E0514 23:54:25.669856 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.669868 kubelet[2636]: W0514 23:54:25.669867 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.670042 kubelet[2636]: E0514 23:54:25.669881 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:25.670141 kubelet[2636]: E0514 23:54:25.670120 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.670213 kubelet[2636]: W0514 23:54:25.670199 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.670287 kubelet[2636]: E0514 23:54:25.670274 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.670596 kubelet[2636]: E0514 23:54:25.670575 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.670596 kubelet[2636]: W0514 23:54:25.670592 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.670688 kubelet[2636]: E0514 23:54:25.670608 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.670792 kubelet[2636]: E0514 23:54:25.670762 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.670792 kubelet[2636]: W0514 23:54:25.670774 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.670853 kubelet[2636]: E0514 23:54:25.670792 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.670964 kubelet[2636]: E0514 23:54:25.670951 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.670964 kubelet[2636]: W0514 23:54:25.670963 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.671028 kubelet[2636]: E0514 23:54:25.671007 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.671170 kubelet[2636]: E0514 23:54:25.671140 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.671170 kubelet[2636]: W0514 23:54:25.671154 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.671234 kubelet[2636]: E0514 23:54:25.671222 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:25.671299 kubelet[2636]: E0514 23:54:25.671283 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.671299 kubelet[2636]: W0514 23:54:25.671299 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.671386 kubelet[2636]: E0514 23:54:25.671362 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.671476 kubelet[2636]: E0514 23:54:25.671465 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.671476 kubelet[2636]: W0514 23:54:25.671475 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.671521 kubelet[2636]: E0514 23:54:25.671497 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.671665 kubelet[2636]: E0514 23:54:25.671654 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.671665 kubelet[2636]: W0514 23:54:25.671664 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.671769 kubelet[2636]: E0514 23:54:25.671740 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.671871 kubelet[2636]: E0514 23:54:25.671859 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.671871 kubelet[2636]: W0514 23:54:25.671870 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.671924 kubelet[2636]: E0514 23:54:25.671886 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.672073 kubelet[2636]: E0514 23:54:25.672063 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.672073 kubelet[2636]: W0514 23:54:25.672073 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.672131 kubelet[2636]: E0514 23:54:25.672088 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:25.672237 kubelet[2636]: E0514 23:54:25.672227 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.672237 kubelet[2636]: W0514 23:54:25.672236 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.672292 kubelet[2636]: E0514 23:54:25.672247 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.672650 kubelet[2636]: E0514 23:54:25.672597 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.672650 kubelet[2636]: W0514 23:54:25.672620 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.672650 kubelet[2636]: E0514 23:54:25.672639 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.672907 kubelet[2636]: E0514 23:54:25.672817 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.672907 kubelet[2636]: W0514 23:54:25.672833 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.672907 kubelet[2636]: E0514 23:54:25.672859 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.675293 kubelet[2636]: E0514 23:54:25.675142 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.675293 kubelet[2636]: W0514 23:54:25.675162 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.675293 kubelet[2636]: E0514 23:54:25.675197 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.675713 kubelet[2636]: E0514 23:54:25.675412 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.675713 kubelet[2636]: W0514 23:54:25.675429 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.675713 kubelet[2636]: E0514 23:54:25.675452 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:25.675713 kubelet[2636]: E0514 23:54:25.675591 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.675713 kubelet[2636]: W0514 23:54:25.675601 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.675713 kubelet[2636]: E0514 23:54:25.675639 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.675887 kubelet[2636]: E0514 23:54:25.675816 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.675887 kubelet[2636]: W0514 23:54:25.675826 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.675887 kubelet[2636]: E0514 23:54:25.675864 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.676080 kubelet[2636]: E0514 23:54:25.675971 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.676080 kubelet[2636]: W0514 23:54:25.675984 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.676080 kubelet[2636]: E0514 23:54:25.676000 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.676202 kubelet[2636]: E0514 23:54:25.676185 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.676202 kubelet[2636]: W0514 23:54:25.676198 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.676452 kubelet[2636]: E0514 23:54:25.676213 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.676452 kubelet[2636]: E0514 23:54:25.676431 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.676452 kubelet[2636]: W0514 23:54:25.676442 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.676525 kubelet[2636]: E0514 23:54:25.676458 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:25.676723 kubelet[2636]: E0514 23:54:25.676708 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.676723 kubelet[2636]: W0514 23:54:25.676721 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.676798 kubelet[2636]: E0514 23:54:25.676732 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.689091 kubelet[2636]: E0514 23:54:25.689049 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:25.689091 kubelet[2636]: W0514 23:54:25.689072 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:25.689091 kubelet[2636]: E0514 23:54:25.689092 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:25.698846 containerd[1458]: time="2025-05-14T23:54:25.698141615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:25.698846 containerd[1458]: time="2025-05-14T23:54:25.698745312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:25.698846 containerd[1458]: time="2025-05-14T23:54:25.698758432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:25.699015 containerd[1458]: time="2025-05-14T23:54:25.698849308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:25.717759 systemd[1]: Started cri-containerd-310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2.scope - libcontainer container 310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2. 
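[editor's note] Each sandbox start above is accompanied by a burst of `time="..." level=... msg="..."` lines from the io.containerd.runc.v2 shim. For post-processing a capture like this one, here is a small, hedged Go sketch that pulls the timestamp, level and message out of that logfmt-style format; the regular expression is an assumption tuned to the lines shown here, not a general containerd log parser.

```go
// Hedged sketch: extract time/level/msg from the containerd shim lines seen
// above (e.g. `time="..." level=info msg="loading plugin ..." runtime=...`).
package main

import (
	"fmt"
	"regexp"
)

// Matches the quoted time value, the bare level token, and a msg value that
// may contain escaped quotes.
var shimLine = regexp.MustCompile(`time="([^"]+)"\s+level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"`)

func parse(line string) (ts, level, msg string, ok bool) {
	m := shimLine.FindStringSubmatch(line)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	// Sample line copied from the capture above.
	sample := `time="2025-05-14T23:54:25.698758432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2`
	if ts, level, msg, ok := parse(sample); ok {
		fmt.Printf("ts=%s level=%s msg=%s\n", ts, level, msg)
	}
}
```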
May 14 23:54:25.741227 containerd[1458]: time="2025-05-14T23:54:25.741181812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vsql8,Uid:453716de-4090-4e02-9946-071325d4cabb,Namespace:calico-system,Attempt:0,} returns sandbox id \"310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2\"" May 14 23:54:26.965410 containerd[1458]: time="2025-05-14T23:54:26.965357051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:26.966051 containerd[1458]: time="2025-05-14T23:54:26.965985788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 23:54:26.966618 containerd[1458]: time="2025-05-14T23:54:26.966587046Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:26.968693 containerd[1458]: time="2025-05-14T23:54:26.968653131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:26.969483 containerd[1458]: time="2025-05-14T23:54:26.969266428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.32097115s" May 14 23:54:26.969483 containerd[1458]: time="2025-05-14T23:54:26.969297067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 23:54:26.971400 containerd[1458]: time="2025-05-14T23:54:26.970494943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 23:54:26.981971 containerd[1458]: time="2025-05-14T23:54:26.981858248Z" level=info msg="CreateContainer within sandbox \"74c01ebae5aace44009816f446af0721bf9efbb45ba451436c8b182085e908ba\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 23:54:26.992721 containerd[1458]: time="2025-05-14T23:54:26.992681333Z" level=info msg="CreateContainer within sandbox \"74c01ebae5aace44009816f446af0721bf9efbb45ba451436c8b182085e908ba\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"642c218ec670c1208ca7307e28d483ebe50a7bd99e0c74653d7a107eb45574c6\"" May 14 23:54:26.993568 containerd[1458]: time="2025-05-14T23:54:26.993138036Z" level=info msg="StartContainer for \"642c218ec670c1208ca7307e28d483ebe50a7bd99e0c74653d7a107eb45574c6\"" May 14 23:54:27.021718 systemd[1]: Started cri-containerd-642c218ec670c1208ca7307e28d483ebe50a7bd99e0c74653d7a107eb45574c6.scope - libcontainer container 642c218ec670c1208ca7307e28d483ebe50a7bd99e0c74653d7a107eb45574c6. 
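[editor's note] The typha pull above is reported by containerd as taking 1.32097115s, which lines up with the spacing of the surrounding log entries: the PullImage request is logged at 23:54:25.646892052Z and the "Pulled image" event at 23:54:26.969266428Z. A hedged, stdlib-only check of that arithmetic (timestamps copied from the log, everything else assumed):

```go
// Hedged sketch: confirm the containerd-reported pull duration for
// ghcr.io/flatcar/calico/typha:v3.29.3 (~1.32s) roughly matches the gap
// between the two log timestamps above.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, err := time.Parse(time.RFC3339Nano, "2025-05-14T23:54:25.646892052Z") // "PullImage ..." logged
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-05-14T23:54:26.969266428Z") // "Pulled image ... in 1.32097115s"
	if err != nil {
		panic(err)
	}
	// Prints roughly 1.322s; the small difference from 1.32097115s is the time
	// between the log statements and the pull actually starting/finishing.
	fmt.Println("elapsed between log entries:", done.Sub(start))
}
```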
May 14 23:54:27.100340 containerd[1458]: time="2025-05-14T23:54:27.100282554Z" level=info msg="StartContainer for \"642c218ec670c1208ca7307e28d483ebe50a7bd99e0c74653d7a107eb45574c6\" returns successfully" May 14 23:54:27.530918 kubelet[2636]: E0514 23:54:27.530824 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xq692" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" May 14 23:54:27.630578 kubelet[2636]: I0514 23:54:27.630068 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cfbfc989d-8ggg9" podStartSLOduration=1.306382042 podStartE2EDuration="2.63005349s" podCreationTimestamp="2025-05-14 23:54:25 +0000 UTC" firstStartedPulling="2025-05-14 23:54:25.646634022 +0000 UTC m=+21.193942987" lastFinishedPulling="2025-05-14 23:54:26.97030547 +0000 UTC m=+22.517614435" observedRunningTime="2025-05-14 23:54:27.629274837 +0000 UTC m=+23.176583802" watchObservedRunningTime="2025-05-14 23:54:27.63005349 +0000 UTC m=+23.177362455" May 14 23:54:27.683336 kubelet[2636]: E0514 23:54:27.683216 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.683336 kubelet[2636]: W0514 23:54:27.683245 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.683336 kubelet[2636]: E0514 23:54:27.683263 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.683631 kubelet[2636]: E0514 23:54:27.683428 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.683631 kubelet[2636]: W0514 23:54:27.683437 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.683631 kubelet[2636]: E0514 23:54:27.683446 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.683631 kubelet[2636]: E0514 23:54:27.683598 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.683631 kubelet[2636]: W0514 23:54:27.683605 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.683631 kubelet[2636]: E0514 23:54:27.683613 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:27.683827 kubelet[2636]: E0514 23:54:27.683787 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.683827 kubelet[2636]: W0514 23:54:27.683798 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.683827 kubelet[2636]: E0514 23:54:27.683806 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.683965 kubelet[2636]: E0514 23:54:27.683944 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.683965 kubelet[2636]: W0514 23:54:27.683954 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.683965 kubelet[2636]: E0514 23:54:27.683962 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.684112 kubelet[2636]: E0514 23:54:27.684088 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.684112 kubelet[2636]: W0514 23:54:27.684097 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.684112 kubelet[2636]: E0514 23:54:27.684104 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.684323 kubelet[2636]: E0514 23:54:27.684306 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.684323 kubelet[2636]: W0514 23:54:27.684316 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.684323 kubelet[2636]: E0514 23:54:27.684325 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.684824 kubelet[2636]: E0514 23:54:27.684498 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.684824 kubelet[2636]: W0514 23:54:27.684509 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.684824 kubelet[2636]: E0514 23:54:27.684517 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:27.684824 kubelet[2636]: E0514 23:54:27.684700 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.684824 kubelet[2636]: W0514 23:54:27.684708 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.684824 kubelet[2636]: E0514 23:54:27.684715 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.685055 kubelet[2636]: E0514 23:54:27.685030 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.685055 kubelet[2636]: W0514 23:54:27.685045 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.685119 kubelet[2636]: E0514 23:54:27.685056 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.685259 kubelet[2636]: E0514 23:54:27.685232 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.685259 kubelet[2636]: W0514 23:54:27.685245 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.685259 kubelet[2636]: E0514 23:54:27.685253 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.685410 kubelet[2636]: E0514 23:54:27.685392 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.685410 kubelet[2636]: W0514 23:54:27.685403 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.685410 kubelet[2636]: E0514 23:54:27.685410 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.685586 kubelet[2636]: E0514 23:54:27.685575 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.685586 kubelet[2636]: W0514 23:54:27.685585 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.685635 kubelet[2636]: E0514 23:54:27.685592 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:27.685732 kubelet[2636]: E0514 23:54:27.685721 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.685732 kubelet[2636]: W0514 23:54:27.685730 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.685777 kubelet[2636]: E0514 23:54:27.685738 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.685887 kubelet[2636]: E0514 23:54:27.685877 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.685887 kubelet[2636]: W0514 23:54:27.685886 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.685932 kubelet[2636]: E0514 23:54:27.685893 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.686143 kubelet[2636]: E0514 23:54:27.686122 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.686143 kubelet[2636]: W0514 23:54:27.686136 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.686192 kubelet[2636]: E0514 23:54:27.686144 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.686307 kubelet[2636]: E0514 23:54:27.686291 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.686334 kubelet[2636]: W0514 23:54:27.686308 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.686334 kubelet[2636]: E0514 23:54:27.686324 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.686490 kubelet[2636]: E0514 23:54:27.686479 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.686517 kubelet[2636]: W0514 23:54:27.686490 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.686517 kubelet[2636]: E0514 23:54:27.686502 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:27.686726 kubelet[2636]: E0514 23:54:27.686706 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.686758 kubelet[2636]: W0514 23:54:27.686724 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.686758 kubelet[2636]: E0514 23:54:27.686744 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.686913 kubelet[2636]: E0514 23:54:27.686902 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.686913 kubelet[2636]: W0514 23:54:27.686913 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.686966 kubelet[2636]: E0514 23:54:27.686925 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.687076 kubelet[2636]: E0514 23:54:27.687066 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.687076 kubelet[2636]: W0514 23:54:27.687076 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.687129 kubelet[2636]: E0514 23:54:27.687089 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.687242 kubelet[2636]: E0514 23:54:27.687232 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.687242 kubelet[2636]: W0514 23:54:27.687242 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.687294 kubelet[2636]: E0514 23:54:27.687255 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.687522 kubelet[2636]: E0514 23:54:27.687507 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.687522 kubelet[2636]: W0514 23:54:27.687521 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.687618 kubelet[2636]: E0514 23:54:27.687546 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:27.687746 kubelet[2636]: E0514 23:54:27.687735 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.687746 kubelet[2636]: W0514 23:54:27.687745 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.687801 kubelet[2636]: E0514 23:54:27.687769 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.687909 kubelet[2636]: E0514 23:54:27.687900 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.687909 kubelet[2636]: W0514 23:54:27.687909 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.687962 kubelet[2636]: E0514 23:54:27.687930 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.688072 kubelet[2636]: E0514 23:54:27.688061 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.688072 kubelet[2636]: W0514 23:54:27.688071 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.688121 kubelet[2636]: E0514 23:54:27.688085 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.688253 kubelet[2636]: E0514 23:54:27.688242 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.688253 kubelet[2636]: W0514 23:54:27.688252 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.688311 kubelet[2636]: E0514 23:54:27.688264 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.688433 kubelet[2636]: E0514 23:54:27.688422 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.688433 kubelet[2636]: W0514 23:54:27.688433 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.688484 kubelet[2636]: E0514 23:54:27.688445 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:27.688735 kubelet[2636]: E0514 23:54:27.688721 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.688735 kubelet[2636]: W0514 23:54:27.688735 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.688805 kubelet[2636]: E0514 23:54:27.688750 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.688923 kubelet[2636]: E0514 23:54:27.688913 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.688923 kubelet[2636]: W0514 23:54:27.688923 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.688982 kubelet[2636]: E0514 23:54:27.688934 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.689104 kubelet[2636]: E0514 23:54:27.689092 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.689104 kubelet[2636]: W0514 23:54:27.689102 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.689161 kubelet[2636]: E0514 23:54:27.689115 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.690026 kubelet[2636]: E0514 23:54:27.689967 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.690026 kubelet[2636]: W0514 23:54:27.689984 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.690026 kubelet[2636]: E0514 23:54:27.689998 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 23:54:27.693078 kubelet[2636]: E0514 23:54:27.693055 2636 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 23:54:27.693078 kubelet[2636]: W0514 23:54:27.693073 2636 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 23:54:27.693211 kubelet[2636]: E0514 23:54:27.693085 2636 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 23:54:28.261106 containerd[1458]: time="2025-05-14T23:54:28.260820665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:28.261490 containerd[1458]: time="2025-05-14T23:54:28.261254691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 23:54:28.262224 containerd[1458]: time="2025-05-14T23:54:28.262180259Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:28.264597 containerd[1458]: time="2025-05-14T23:54:28.264519781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:28.265144 containerd[1458]: time="2025-05-14T23:54:28.265108961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.294583658s" May 14 23:54:28.265184 containerd[1458]: time="2025-05-14T23:54:28.265145120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 23:54:28.267752 containerd[1458]: time="2025-05-14T23:54:28.267661716Z" level=info msg="CreateContainer within sandbox \"310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 23:54:28.278957 containerd[1458]: time="2025-05-14T23:54:28.278918178Z" level=info msg="CreateContainer within sandbox \"310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b\"" May 14 23:54:28.280286 containerd[1458]: time="2025-05-14T23:54:28.279338444Z" level=info msg="StartContainer for \"240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b\"" May 14 23:54:28.303691 systemd[1]: Started cri-containerd-240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b.scope - libcontainer container 240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b. May 14 23:54:28.327123 containerd[1458]: time="2025-05-14T23:54:28.327034883Z" level=info msg="StartContainer for \"240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b\" returns successfully" May 14 23:54:28.359730 systemd[1]: cri-containerd-240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b.scope: Deactivated successfully. May 14 23:54:28.359984 systemd[1]: cri-containerd-240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b.scope: Consumed 43ms CPU time, 7.9M memory peak, 6.2M written to disk. May 14 23:54:28.397090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b-rootfs.mount: Deactivated successfully. 
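[editor's note] The flexvol-driver container above runs briefly (43ms CPU, 6.2M written to disk) and its scope is then deactivated, which matches an init container that copies a driver binary onto the host via the flexvol-driver-host volume and exits. The checker below is a hedged sketch for verifying the result on the node; the "vendor~driver" directory layout it assumes (nodeagent~uds containing an executable named uds) is the conventional FlexVolume discovery layout, and only the plugin path itself is taken from the log.

```go
// Hedged sketch: after the flexvol-driver init container exits, each
// "vendor~driver" subdirectory under the FlexVolume plugin dir probed earlier
// should contain an executable named after its driver part.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	pluginDir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec" // path from the kubelet log
	entries, err := os.ReadDir(pluginDir)
	if err != nil {
		fmt.Println("cannot read plugin dir:", err)
		return
	}
	for _, e := range entries {
		if !e.IsDir() || !strings.Contains(e.Name(), "~") {
			continue
		}
		driver := e.Name()[strings.Index(e.Name(), "~")+1:] // "nodeagent~uds" -> "uds"
		bin := filepath.Join(pluginDir, e.Name(), driver)
		if info, err := os.Stat(bin); err == nil && info.Mode()&0o111 != 0 {
			fmt.Printf("driver %s installed at %s\n", e.Name(), bin)
		} else {
			fmt.Printf("driver %s still missing its %s binary\n", e.Name(), driver)
		}
	}
}
```

Once the binary is present, the repeated "error creating Flexvolume plugin from directory nodeagent~uds" probes earlier in this log should stop recurring.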
May 14 23:54:28.417861 containerd[1458]: time="2025-05-14T23:54:28.417621922Z" level=info msg="shim disconnected" id=240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b namespace=k8s.io May 14 23:54:28.417861 containerd[1458]: time="2025-05-14T23:54:28.417698720Z" level=warning msg="cleaning up after shim disconnected" id=240502fc742e88f4c569cf23727c240d4bef7777ce498e730597497bcacd052b namespace=k8s.io May 14 23:54:28.417861 containerd[1458]: time="2025-05-14T23:54:28.417707520Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:54:28.624938 containerd[1458]: time="2025-05-14T23:54:28.624871767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 23:54:28.626296 kubelet[2636]: I0514 23:54:28.626263 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:54:29.530856 kubelet[2636]: E0514 23:54:29.530792 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xq692" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" May 14 23:54:31.530597 kubelet[2636]: E0514 23:54:31.530548 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xq692" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" May 14 23:54:31.845400 containerd[1458]: time="2025-05-14T23:54:31.845346228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:31.845975 containerd[1458]: time="2025-05-14T23:54:31.845940930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 23:54:31.846777 containerd[1458]: time="2025-05-14T23:54:31.846751626Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:31.848836 containerd[1458]: time="2025-05-14T23:54:31.848779566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:31.849606 containerd[1458]: time="2025-05-14T23:54:31.849575302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 3.224669256s" May 14 23:54:31.849661 containerd[1458]: time="2025-05-14T23:54:31.849607421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 23:54:31.853251 containerd[1458]: time="2025-05-14T23:54:31.853215554Z" level=info msg="CreateContainer within sandbox \"310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 23:54:31.868372 containerd[1458]: 
time="2025-05-14T23:54:31.868328504Z" level=info msg="CreateContainer within sandbox \"310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3\"" May 14 23:54:31.869083 containerd[1458]: time="2025-05-14T23:54:31.869052163Z" level=info msg="StartContainer for \"a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3\"" May 14 23:54:31.903722 systemd[1]: Started cri-containerd-a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3.scope - libcontainer container a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3. May 14 23:54:31.931849 containerd[1458]: time="2025-05-14T23:54:31.931797416Z" level=info msg="StartContainer for \"a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3\" returns successfully" May 14 23:54:32.448226 systemd[1]: cri-containerd-a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3.scope: Deactivated successfully. May 14 23:54:32.448493 systemd[1]: cri-containerd-a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3.scope: Consumed 447ms CPU time, 157.9M memory peak, 4K read from disk, 150.3M written to disk. May 14 23:54:32.464455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3-rootfs.mount: Deactivated successfully. May 14 23:54:32.539732 kubelet[2636]: I0514 23:54:32.539704 2636 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 23:54:32.564673 kubelet[2636]: I0514 23:54:32.564629 2636 topology_manager.go:215] "Topology Admit Handler" podUID="5296ee56-ba99-4940-a157-166e05449d2c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m9489" May 14 23:54:32.566912 kubelet[2636]: I0514 23:54:32.566877 2636 topology_manager.go:215] "Topology Admit Handler" podUID="f03276de-018a-4c35-8ec5-8ee9060e84e4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b47sh" May 14 23:54:32.567615 kubelet[2636]: I0514 23:54:32.567591 2636 topology_manager.go:215] "Topology Admit Handler" podUID="8967758b-5095-4f47-a879-8fbab63daefc" podNamespace="calico-system" podName="calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:32.573878 kubelet[2636]: I0514 23:54:32.573847 2636 topology_manager.go:215] "Topology Admit Handler" podUID="dbd41afb-946c-4f95-a556-b712c9bfb043" podNamespace="calico-apiserver" podName="calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:32.574156 kubelet[2636]: I0514 23:54:32.574120 2636 topology_manager.go:215] "Topology Admit Handler" podUID="e2130025-45f8-46c5-b0f8-d9cf000b93a0" podNamespace="calico-apiserver" podName="calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:32.574361 systemd[1]: Created slice kubepods-burstable-pod5296ee56_ba99_4940_a157_166e05449d2c.slice - libcontainer container kubepods-burstable-pod5296ee56_ba99_4940_a157_166e05449d2c.slice. 
May 14 23:54:32.576171 containerd[1458]: time="2025-05-14T23:54:32.576052487Z" level=info msg="shim disconnected" id=a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3 namespace=k8s.io May 14 23:54:32.576171 containerd[1458]: time="2025-05-14T23:54:32.576114086Z" level=warning msg="cleaning up after shim disconnected" id=a1e8c92bab304e14240e0002c8aa78e2eab6ff36abbf770e202a39a1971ed8c3 namespace=k8s.io May 14 23:54:32.576171 containerd[1458]: time="2025-05-14T23:54:32.576124525Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:54:32.585161 systemd[1]: Created slice kubepods-burstable-podf03276de_018a_4c35_8ec5_8ee9060e84e4.slice - libcontainer container kubepods-burstable-podf03276de_018a_4c35_8ec5_8ee9060e84e4.slice. May 14 23:54:32.593460 systemd[1]: Created slice kubepods-besteffort-pod8967758b_5095_4f47_a879_8fbab63daefc.slice - libcontainer container kubepods-besteffort-pod8967758b_5095_4f47_a879_8fbab63daefc.slice. May 14 23:54:32.598839 containerd[1458]: time="2025-05-14T23:54:32.598720558Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:54:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 23:54:32.599927 systemd[1]: Created slice kubepods-besteffort-poddbd41afb_946c_4f95_a556_b712c9bfb043.slice - libcontainer container kubepods-besteffort-poddbd41afb_946c_4f95_a556_b712c9bfb043.slice. May 14 23:54:32.606051 systemd[1]: Created slice kubepods-besteffort-pode2130025_45f8_46c5_b0f8_d9cf000b93a0.slice - libcontainer container kubepods-besteffort-pode2130025_45f8_46c5_b0f8_d9cf000b93a0.slice. May 14 23:54:32.639207 containerd[1458]: time="2025-05-14T23:54:32.639170200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 23:54:32.719297 kubelet[2636]: I0514 23:54:32.719181 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8n94\" (UniqueName: \"kubernetes.io/projected/dbd41afb-946c-4f95-a556-b712c9bfb043-kube-api-access-z8n94\") pod \"calico-apiserver-948cf5cf6-mfgt6\" (UID: \"dbd41afb-946c-4f95-a556-b712c9bfb043\") " pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:32.719297 kubelet[2636]: I0514 23:54:32.719224 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8967758b-5095-4f47-a879-8fbab63daefc-tigera-ca-bundle\") pod \"calico-kube-controllers-55976b5db9-6xq9g\" (UID: \"8967758b-5095-4f47-a879-8fbab63daefc\") " pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:32.719297 kubelet[2636]: I0514 23:54:32.719242 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxzc\" (UniqueName: \"kubernetes.io/projected/8967758b-5095-4f47-a879-8fbab63daefc-kube-api-access-flxzc\") pod \"calico-kube-controllers-55976b5db9-6xq9g\" (UID: \"8967758b-5095-4f47-a879-8fbab63daefc\") " pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:32.719297 kubelet[2636]: I0514 23:54:32.719264 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2130025-45f8-46c5-b0f8-d9cf000b93a0-calico-apiserver-certs\") pod \"calico-apiserver-948cf5cf6-nxpmq\" (UID: \"e2130025-45f8-46c5-b0f8-d9cf000b93a0\") " 
pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:32.719860 kubelet[2636]: I0514 23:54:32.719281 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6mt\" (UniqueName: \"kubernetes.io/projected/5296ee56-ba99-4940-a157-166e05449d2c-kube-api-access-jx6mt\") pod \"coredns-7db6d8ff4d-m9489\" (UID: \"5296ee56-ba99-4940-a157-166e05449d2c\") " pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:32.719860 kubelet[2636]: I0514 23:54:32.719799 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbd41afb-946c-4f95-a556-b712c9bfb043-calico-apiserver-certs\") pod \"calico-apiserver-948cf5cf6-mfgt6\" (UID: \"dbd41afb-946c-4f95-a556-b712c9bfb043\") " pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:32.719860 kubelet[2636]: I0514 23:54:32.719819 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86kfg\" (UniqueName: \"kubernetes.io/projected/f03276de-018a-4c35-8ec5-8ee9060e84e4-kube-api-access-86kfg\") pod \"coredns-7db6d8ff4d-b47sh\" (UID: \"f03276de-018a-4c35-8ec5-8ee9060e84e4\") " pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:32.719860 kubelet[2636]: I0514 23:54:32.719835 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnz72\" (UniqueName: \"kubernetes.io/projected/e2130025-45f8-46c5-b0f8-d9cf000b93a0-kube-api-access-vnz72\") pod \"calico-apiserver-948cf5cf6-nxpmq\" (UID: \"e2130025-45f8-46c5-b0f8-d9cf000b93a0\") " pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:32.719860 kubelet[2636]: I0514 23:54:32.719860 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5296ee56-ba99-4940-a157-166e05449d2c-config-volume\") pod \"coredns-7db6d8ff4d-m9489\" (UID: \"5296ee56-ba99-4940-a157-166e05449d2c\") " pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:32.720207 kubelet[2636]: I0514 23:54:32.719878 2636 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f03276de-018a-4c35-8ec5-8ee9060e84e4-config-volume\") pod \"coredns-7db6d8ff4d-b47sh\" (UID: \"f03276de-018a-4c35-8ec5-8ee9060e84e4\") " pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:32.882006 containerd[1458]: time="2025-05-14T23:54:32.881971086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:0,}" May 14 23:54:32.890804 containerd[1458]: time="2025-05-14T23:54:32.890700316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:0,}" May 14 23:54:32.898893 containerd[1458]: time="2025-05-14T23:54:32.898810243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:0,}" May 14 23:54:32.906450 containerd[1458]: time="2025-05-14T23:54:32.906226471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:0,}" May 14 23:54:32.912981 
containerd[1458]: time="2025-05-14T23:54:32.912948358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:0,}" May 14 23:54:33.316881 containerd[1458]: time="2025-05-14T23:54:33.316659206Z" level=error msg="Failed to destroy network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.326223 containerd[1458]: time="2025-05-14T23:54:33.326154704Z" level=error msg="encountered an error cleaning up failed sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.326346 containerd[1458]: time="2025-05-14T23:54:33.326251301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.330566 kubelet[2636]: E0514 23:54:33.329984 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.330566 kubelet[2636]: E0514 23:54:33.330115 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:33.330566 kubelet[2636]: E0514 23:54:33.330152 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:33.330752 kubelet[2636]: E0514 23:54:33.330194 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cf5cf6-mfgt6_calico-apiserver(dbd41afb-946c-4f95-a556-b712c9bfb043)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cf5cf6-mfgt6_calico-apiserver(dbd41afb-946c-4f95-a556-b712c9bfb043)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" podUID="dbd41afb-946c-4f95-a556-b712c9bfb043" May 14 23:54:33.331483 containerd[1458]: time="2025-05-14T23:54:33.330922932Z" level=error msg="Failed to destroy network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.332322 containerd[1458]: time="2025-05-14T23:54:33.332248176Z" level=error msg="encountered an error cleaning up failed sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.332398 containerd[1458]: time="2025-05-14T23:54:33.332335413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.332604 containerd[1458]: time="2025-05-14T23:54:33.332527208Z" level=error msg="Failed to destroy network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.332670 kubelet[2636]: E0514 23:54:33.332623 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.332706 kubelet[2636]: E0514 23:54:33.332688 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:33.332731 kubelet[2636]: E0514 23:54:33.332705 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:33.332760 kubelet[2636]: E0514 23:54:33.332737 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-m9489_kube-system(5296ee56-ba99-4940-a157-166e05449d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-m9489_kube-system(5296ee56-ba99-4940-a157-166e05449d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m9489" podUID="5296ee56-ba99-4940-a157-166e05449d2c" May 14 23:54:33.332929 containerd[1458]: time="2025-05-14T23:54:33.332894958Z" level=error msg="encountered an error cleaning up failed sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.332993 containerd[1458]: time="2025-05-14T23:54:33.332949877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.333317 kubelet[2636]: E0514 23:54:33.333135 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.333317 kubelet[2636]: E0514 23:54:33.333189 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:33.333317 kubelet[2636]: E0514 23:54:33.333243 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:33.333431 kubelet[2636]: E0514 23:54:33.333281 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-55976b5db9-6xq9g_calico-system(8967758b-5095-4f47-a879-8fbab63daefc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55976b5db9-6xq9g_calico-system(8967758b-5095-4f47-a879-8fbab63daefc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" podUID="8967758b-5095-4f47-a879-8fbab63daefc" May 14 23:54:33.339731 containerd[1458]: time="2025-05-14T23:54:33.339628292Z" level=error msg="Failed to destroy network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.340552 containerd[1458]: time="2025-05-14T23:54:33.340505388Z" level=error msg="encountered an error cleaning up failed sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.340630 containerd[1458]: time="2025-05-14T23:54:33.340585146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.340809 kubelet[2636]: E0514 23:54:33.340762 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.340858 kubelet[2636]: E0514 23:54:33.340822 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:33.340858 kubelet[2636]: E0514 23:54:33.340840 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:33.340916 kubelet[2636]: E0514 
23:54:33.340875 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b47sh_kube-system(f03276de-018a-4c35-8ec5-8ee9060e84e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b47sh_kube-system(f03276de-018a-4c35-8ec5-8ee9060e84e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b47sh" podUID="f03276de-018a-4c35-8ec5-8ee9060e84e4" May 14 23:54:33.343504 containerd[1458]: time="2025-05-14T23:54:33.342917041Z" level=error msg="Failed to destroy network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.343504 containerd[1458]: time="2025-05-14T23:54:33.343370109Z" level=error msg="encountered an error cleaning up failed sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.343504 containerd[1458]: time="2025-05-14T23:54:33.343421308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.343679 kubelet[2636]: E0514 23:54:33.343587 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.343679 kubelet[2636]: E0514 23:54:33.343636 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:33.343679 kubelet[2636]: E0514 23:54:33.343651 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:33.344231 kubelet[2636]: E0514 23:54:33.344187 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cf5cf6-nxpmq_calico-apiserver(e2130025-45f8-46c5-b0f8-d9cf000b93a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cf5cf6-nxpmq_calico-apiserver(e2130025-45f8-46c5-b0f8-d9cf000b93a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" podUID="e2130025-45f8-46c5-b0f8-d9cf000b93a0" May 14 23:54:33.551001 systemd[1]: Created slice kubepods-besteffort-pod78b8ecbd_ebea_41ed_a71c_8ddd96e45e21.slice - libcontainer container kubepods-besteffort-pod78b8ecbd_ebea_41ed_a71c_8ddd96e45e21.slice. May 14 23:54:33.557593 containerd[1458]: time="2025-05-14T23:54:33.557444361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:0,}" May 14 23:54:33.632651 containerd[1458]: time="2025-05-14T23:54:33.632316175Z" level=error msg="Failed to destroy network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.632917 containerd[1458]: time="2025-05-14T23:54:33.632853240Z" level=error msg="encountered an error cleaning up failed sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.632956 containerd[1458]: time="2025-05-14T23:54:33.632918799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.633174 kubelet[2636]: E0514 23:54:33.633142 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.637225 kubelet[2636]: E0514 23:54:33.633195 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq692" May 14 23:54:33.637225 kubelet[2636]: E0514 23:54:33.633225 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq692" May 14 23:54:33.637225 kubelet[2636]: E0514 23:54:33.633275 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xq692_calico-system(78b8ecbd-ebea-41ed-a71c-8ddd96e45e21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xq692_calico-system(78b8ecbd-ebea-41ed-a71c-8ddd96e45e21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xq692" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" May 14 23:54:33.649688 kubelet[2636]: I0514 23:54:33.649652 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4" May 14 23:54:33.650794 containerd[1458]: time="2025-05-14T23:54:33.650698148Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\"" May 14 23:54:33.650977 containerd[1458]: time="2025-05-14T23:54:33.650930661Z" level=info msg="Ensure that sandbox 03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4 in task-service has been cleanup successfully" May 14 23:54:33.654650 containerd[1458]: time="2025-05-14T23:54:33.654601920Z" level=info msg="TearDown network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" successfully" May 14 23:54:33.654650 containerd[1458]: time="2025-05-14T23:54:33.654648319Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" returns successfully" May 14 23:54:33.657687 kubelet[2636]: I0514 23:54:33.657592 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8" May 14 23:54:33.658615 containerd[1458]: time="2025-05-14T23:54:33.658402455Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\"" May 14 23:54:33.660621 containerd[1458]: time="2025-05-14T23:54:33.658894282Z" level=info msg="Ensure that sandbox 33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8 in task-service has been cleanup successfully" May 14 23:54:33.660681 kubelet[2636]: I0514 23:54:33.659901 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6" May 14 23:54:33.660741 containerd[1458]: time="2025-05-14T23:54:33.660671793Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\"" May 14 23:54:33.660905 containerd[1458]: time="2025-05-14T23:54:33.660871427Z" level=info 
msg="Ensure that sandbox 2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6 in task-service has been cleanup successfully" May 14 23:54:33.661714 containerd[1458]: time="2025-05-14T23:54:33.661662125Z" level=info msg="TearDown network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" successfully" May 14 23:54:33.662695 containerd[1458]: time="2025-05-14T23:54:33.661698724Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" returns successfully" May 14 23:54:33.664168 containerd[1458]: time="2025-05-14T23:54:33.663851585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:1,}" May 14 23:54:33.665369 containerd[1458]: time="2025-05-14T23:54:33.664544526Z" level=info msg="TearDown network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" successfully" May 14 23:54:33.665369 containerd[1458]: time="2025-05-14T23:54:33.664572445Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" returns successfully" May 14 23:54:33.665369 containerd[1458]: time="2025-05-14T23:54:33.665067871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:1,}" May 14 23:54:33.665492 kubelet[2636]: I0514 23:54:33.665476 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c" May 14 23:54:33.666046 containerd[1458]: time="2025-05-14T23:54:33.666003806Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\"" May 14 23:54:33.666755 containerd[1458]: time="2025-05-14T23:54:33.666192160Z" level=info msg="Ensure that sandbox 87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c in task-service has been cleanup successfully" May 14 23:54:33.667518 kubelet[2636]: I0514 23:54:33.667491 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26" May 14 23:54:33.668043 containerd[1458]: time="2025-05-14T23:54:33.667994831Z" level=info msg="TearDown network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" successfully" May 14 23:54:33.668043 containerd[1458]: time="2025-05-14T23:54:33.668024750Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" returns successfully" May 14 23:54:33.668353 containerd[1458]: time="2025-05-14T23:54:33.668329581Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\"" May 14 23:54:33.668547 containerd[1458]: time="2025-05-14T23:54:33.668486097Z" level=info msg="Ensure that sandbox 7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26 in task-service has been cleanup successfully" May 14 23:54:33.669387 containerd[1458]: time="2025-05-14T23:54:33.669355793Z" level=info msg="TearDown network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" successfully" May 14 23:54:33.669387 containerd[1458]: time="2025-05-14T23:54:33.669387432Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" returns successfully" May 14 
23:54:33.669655 containerd[1458]: time="2025-05-14T23:54:33.669631305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:1,}" May 14 23:54:33.671291 containerd[1458]: time="2025-05-14T23:54:33.671259540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:1,}" May 14 23:54:33.672113 kubelet[2636]: I0514 23:54:33.671992 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f" May 14 23:54:33.681012 containerd[1458]: time="2025-05-14T23:54:33.680959433Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\"" May 14 23:54:33.681826 containerd[1458]: time="2025-05-14T23:54:33.681158907Z" level=info msg="Ensure that sandbox 411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f in task-service has been cleanup successfully" May 14 23:54:33.681826 containerd[1458]: time="2025-05-14T23:54:33.681670853Z" level=info msg="TearDown network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" successfully" May 14 23:54:33.681826 containerd[1458]: time="2025-05-14T23:54:33.681687253Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" returns successfully" May 14 23:54:33.684392 containerd[1458]: time="2025-05-14T23:54:33.684130385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:1,}" May 14 23:54:33.688397 containerd[1458]: time="2025-05-14T23:54:33.688357709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:1,}" May 14 23:54:33.869155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26-shm.mount: Deactivated successfully. May 14 23:54:33.869907 systemd[1]: run-netns-cni\x2d8db7bb85\x2df789\x2d0ab5\x2d9442\x2df3964775b84e.mount: Deactivated successfully. May 14 23:54:33.869960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6-shm.mount: Deactivated successfully. May 14 23:54:33.870007 systemd[1]: run-netns-cni\x2d8b7bdbc5\x2d692b\x2dc3fd\x2d0741\x2d91e8f99a5e5e.mount: Deactivated successfully. May 14 23:54:33.870053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f-shm.mount: Deactivated successfully. 
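Every sandbox above fails for the same reason: the Calico CNI plugin stats /var/lib/calico/nodename and the file does not exist yet, because calico-node (whose image is still being pulled) is what writes it once it is running and has /var/lib/calico/ mounted. A minimal sketch of that readiness check follows, assuming only the path reported in the errors; the surrounding code is illustrative, not Calico's implementation.

```go
// Illustrative check of the condition the repeated CNI errors above describe:
// pod networking cannot be set up until /var/lib/calico/nodename exists.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // path taken from the error messages above

func calicoNodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the state the sandboxes above keep hitting: calico-node has not
		// started (or has not mounted /var/lib/calico/), so the file is absent.
		return "", fmt.Errorf("calico/node not ready: %w", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("calico node name:", name)
}
```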
May 14 23:54:33.873729 containerd[1458]: time="2025-05-14T23:54:33.873579117Z" level=error msg="Failed to destroy network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.877337 containerd[1458]: time="2025-05-14T23:54:33.875987171Z" level=error msg="encountered an error cleaning up failed sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.877337 containerd[1458]: time="2025-05-14T23:54:33.876257803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.876733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09-shm.mount: Deactivated successfully. May 14 23:54:33.877572 kubelet[2636]: E0514 23:54:33.876517 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.877572 kubelet[2636]: E0514 23:54:33.876580 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:33.877572 kubelet[2636]: E0514 23:54:33.876598 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:33.877685 kubelet[2636]: E0514 23:54:33.876637 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55976b5db9-6xq9g_calico-system(8967758b-5095-4f47-a879-8fbab63daefc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55976b5db9-6xq9g_calico-system(8967758b-5095-4f47-a879-8fbab63daefc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" podUID="8967758b-5095-4f47-a879-8fbab63daefc" May 14 23:54:33.920684 containerd[1458]: time="2025-05-14T23:54:33.907254308Z" level=error msg="Failed to destroy network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.920684 containerd[1458]: time="2025-05-14T23:54:33.915836311Z" level=error msg="encountered an error cleaning up failed sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.920684 containerd[1458]: time="2025-05-14T23:54:33.915912309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.921665 kubelet[2636]: E0514 23:54:33.916190 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.921665 kubelet[2636]: E0514 23:54:33.916249 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:33.921665 kubelet[2636]: E0514 23:54:33.916270 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:33.921798 kubelet[2636]: E0514 23:54:33.916313 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cf5cf6-nxpmq_calico-apiserver(e2130025-45f8-46c5-b0f8-d9cf000b93a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-948cf5cf6-nxpmq_calico-apiserver(e2130025-45f8-46c5-b0f8-d9cf000b93a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" podUID="e2130025-45f8-46c5-b0f8-d9cf000b93a0" May 14 23:54:33.923033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b-shm.mount: Deactivated successfully. May 14 23:54:33.943934 systemd[1]: Started sshd@7-10.0.0.62:22-10.0.0.1:46856.service - OpenSSH per-connection server daemon (10.0.0.1:46856). May 14 23:54:33.993017 containerd[1458]: time="2025-05-14T23:54:33.992960503Z" level=error msg="Failed to destroy network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.997666 containerd[1458]: time="2025-05-14T23:54:33.996682120Z" level=error msg="encountered an error cleaning up failed sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.997666 containerd[1458]: time="2025-05-14T23:54:33.996763238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.997800 kubelet[2636]: E0514 23:54:33.997693 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:33.997800 kubelet[2636]: E0514 23:54:33.997755 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:33.997800 kubelet[2636]: E0514 23:54:33.997780 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:33.997756 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43-shm.mount: Deactivated successfully. May 14 23:54:33.997995 kubelet[2636]: E0514 23:54:33.997827 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b47sh_kube-system(f03276de-018a-4c35-8ec5-8ee9060e84e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b47sh_kube-system(f03276de-018a-4c35-8ec5-8ee9060e84e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b47sh" podUID="f03276de-018a-4c35-8ec5-8ee9060e84e4" May 14 23:54:34.101592 sshd[3687]: Accepted publickey for core from 10.0.0.1 port 46856 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:34.102876 sshd-session[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:34.111861 systemd-logind[1440]: New session 8 of user core. May 14 23:54:34.119732 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 23:54:34.133189 containerd[1458]: time="2025-05-14T23:54:34.133117965Z" level=error msg="Failed to destroy network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.133864 containerd[1458]: time="2025-05-14T23:54:34.133562793Z" level=error msg="encountered an error cleaning up failed sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.133864 containerd[1458]: time="2025-05-14T23:54:34.133625511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.134661 kubelet[2636]: E0514 23:54:34.134606 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.135426 kubelet[2636]: E0514 23:54:34.135374 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:34.137224 kubelet[2636]: E0514 23:54:34.135437 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:34.137224 kubelet[2636]: E0514 23:54:34.135489 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cf5cf6-mfgt6_calico-apiserver(dbd41afb-946c-4f95-a556-b712c9bfb043)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cf5cf6-mfgt6_calico-apiserver(dbd41afb-946c-4f95-a556-b712c9bfb043)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" podUID="dbd41afb-946c-4f95-a556-b712c9bfb043" May 14 23:54:34.140799 containerd[1458]: time="2025-05-14T23:54:34.140684843Z" level=error msg="Failed to destroy network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.143151 containerd[1458]: time="2025-05-14T23:54:34.142903144Z" level=error msg="encountered an error cleaning up failed sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.143151 containerd[1458]: time="2025-05-14T23:54:34.142975622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.143253 kubelet[2636]: E0514 23:54:34.143189 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.143296 kubelet[2636]: E0514 
23:54:34.143284 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq692" May 14 23:54:34.143443 kubelet[2636]: E0514 23:54:34.143304 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq692" May 14 23:54:34.143523 kubelet[2636]: E0514 23:54:34.143388 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xq692_calico-system(78b8ecbd-ebea-41ed-a71c-8ddd96e45e21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xq692_calico-system(78b8ecbd-ebea-41ed-a71c-8ddd96e45e21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xq692" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" May 14 23:54:34.171676 containerd[1458]: time="2025-05-14T23:54:34.171275469Z" level=error msg="Failed to destroy network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.173211 containerd[1458]: time="2025-05-14T23:54:34.172998583Z" level=error msg="encountered an error cleaning up failed sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.173211 containerd[1458]: time="2025-05-14T23:54:34.173069461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.173900 kubelet[2636]: E0514 23:54:34.173605 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 14 23:54:34.174664 kubelet[2636]: E0514 23:54:34.174030 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:34.174664 kubelet[2636]: E0514 23:54:34.174186 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:34.174664 kubelet[2636]: E0514 23:54:34.174251 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-m9489_kube-system(5296ee56-ba99-4940-a157-166e05449d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-m9489_kube-system(5296ee56-ba99-4940-a157-166e05449d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m9489" podUID="5296ee56-ba99-4940-a157-166e05449d2c" May 14 23:54:34.283439 sshd[3810]: Connection closed by 10.0.0.1 port 46856 May 14 23:54:34.283770 sshd-session[3687]: pam_unix(sshd:session): session closed for user core May 14 23:54:34.287877 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. May 14 23:54:34.288866 systemd[1]: sshd@7-10.0.0.62:22-10.0.0.1:46856.service: Deactivated successfully. May 14 23:54:34.292985 systemd[1]: session-8.scope: Deactivated successfully. May 14 23:54:34.295345 systemd-logind[1440]: Removed session 8. 
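Every CreatePodSandbox failure in this stretch of the journal has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file that the calico/node container writes once it is running with /var/lib/calico mounted from the host. A minimal sketch of that check, runnable on the affected node, follows; the path is taken from the error text, and everything else (script structure, wording) is illustrative rather than Calico's own code.

#!/usr/bin/env python3
# Illustrative sketch only: it reproduces the stat that the CNI plugin reports
# as failing ("stat /var/lib/calico/nodename: no such file or directory") so
# the condition can be confirmed directly on the node. Not Calico code.
import sys

NODENAME_FILE = "/var/lib/calico/nodename"   # path quoted in the kubelet/containerd errors

def main() -> int:
    try:
        with open(NODENAME_FILE, encoding="utf-8") as fh:
            print(f"{NODENAME_FILE} exists; node registered as {fh.read().strip()!r}")
        return 0
    except FileNotFoundError:
        print(f"{NODENAME_FILE} is missing: the calico/node container is likely not "
              "running yet, or /var/lib/calico is not mounted into it, which is "
              "exactly what the CNI error message asks to check.", file=sys.stderr)
        return 1

if __name__ == "__main__":
    sys.exit(main())

Once calico/node is healthy and the file appears, the periodic retries visible below (the sandbox runs with Attempt:2 and Attempt:3) would be expected to start succeeding on their own.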
May 14 23:54:34.677180 kubelet[2636]: I0514 23:54:34.677145 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09" May 14 23:54:34.677971 containerd[1458]: time="2025-05-14T23:54:34.677920864Z" level=info msg="StopPodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\"" May 14 23:54:34.678612 containerd[1458]: time="2025-05-14T23:54:34.678583046Z" level=info msg="Ensure that sandbox 9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09 in task-service has been cleanup successfully" May 14 23:54:34.678782 containerd[1458]: time="2025-05-14T23:54:34.678762961Z" level=info msg="TearDown network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" successfully" May 14 23:54:34.678808 containerd[1458]: time="2025-05-14T23:54:34.678780041Z" level=info msg="StopPodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" returns successfully" May 14 23:54:34.679436 containerd[1458]: time="2025-05-14T23:54:34.679405664Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\"" May 14 23:54:34.679515 containerd[1458]: time="2025-05-14T23:54:34.679497062Z" level=info msg="TearDown network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" successfully" May 14 23:54:34.679515 containerd[1458]: time="2025-05-14T23:54:34.679511541Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" returns successfully" May 14 23:54:34.679849 kubelet[2636]: I0514 23:54:34.679760 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43" May 14 23:54:34.680551 containerd[1458]: time="2025-05-14T23:54:34.680149364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:2,}" May 14 23:54:34.680551 containerd[1458]: time="2025-05-14T23:54:34.680224402Z" level=info msg="StopPodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\"" May 14 23:54:34.680551 containerd[1458]: time="2025-05-14T23:54:34.680369239Z" level=info msg="Ensure that sandbox 4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43 in task-service has been cleanup successfully" May 14 23:54:34.680854 containerd[1458]: time="2025-05-14T23:54:34.680563313Z" level=info msg="TearDown network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" successfully" May 14 23:54:34.680854 containerd[1458]: time="2025-05-14T23:54:34.680578793Z" level=info msg="StopPodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" returns successfully" May 14 23:54:34.681299 containerd[1458]: time="2025-05-14T23:54:34.681265695Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\"" May 14 23:54:34.681363 containerd[1458]: time="2025-05-14T23:54:34.681346333Z" level=info msg="TearDown network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" successfully" May 14 23:54:34.681399 containerd[1458]: time="2025-05-14T23:54:34.681360492Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" returns successfully" May 14 23:54:34.682095 
containerd[1458]: time="2025-05-14T23:54:34.681824480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:2,}" May 14 23:54:34.682395 kubelet[2636]: I0514 23:54:34.682374 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544" May 14 23:54:34.683673 containerd[1458]: time="2025-05-14T23:54:34.683643511Z" level=info msg="StopPodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\"" May 14 23:54:34.684638 containerd[1458]: time="2025-05-14T23:54:34.684586846Z" level=info msg="Ensure that sandbox 6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544 in task-service has been cleanup successfully" May 14 23:54:34.685425 containerd[1458]: time="2025-05-14T23:54:34.685064554Z" level=info msg="TearDown network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" successfully" May 14 23:54:34.685425 containerd[1458]: time="2025-05-14T23:54:34.685333986Z" level=info msg="StopPodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" returns successfully" May 14 23:54:34.686730 containerd[1458]: time="2025-05-14T23:54:34.686133845Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\"" May 14 23:54:34.686730 containerd[1458]: time="2025-05-14T23:54:34.686210963Z" level=info msg="TearDown network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" successfully" May 14 23:54:34.686730 containerd[1458]: time="2025-05-14T23:54:34.686221083Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" returns successfully" May 14 23:54:34.687127 kubelet[2636]: I0514 23:54:34.686585 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4" May 14 23:54:34.687194 containerd[1458]: time="2025-05-14T23:54:34.686920744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:2,}" May 14 23:54:34.688393 containerd[1458]: time="2025-05-14T23:54:34.688363626Z" level=info msg="StopPodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\"" May 14 23:54:34.688602 containerd[1458]: time="2025-05-14T23:54:34.688576500Z" level=info msg="Ensure that sandbox fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4 in task-service has been cleanup successfully" May 14 23:54:34.688825 containerd[1458]: time="2025-05-14T23:54:34.688744336Z" level=info msg="TearDown network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" successfully" May 14 23:54:34.688825 containerd[1458]: time="2025-05-14T23:54:34.688762855Z" level=info msg="StopPodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" returns successfully" May 14 23:54:34.689281 containerd[1458]: time="2025-05-14T23:54:34.689249242Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\"" May 14 23:54:34.689345 containerd[1458]: time="2025-05-14T23:54:34.689328760Z" level=info msg="TearDown network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" successfully" May 14 23:54:34.689370 
containerd[1458]: time="2025-05-14T23:54:34.689344640Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" returns successfully" May 14 23:54:34.690190 containerd[1458]: time="2025-05-14T23:54:34.690126979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:2,}" May 14 23:54:34.691177 kubelet[2636]: I0514 23:54:34.690547 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4" May 14 23:54:34.691316 containerd[1458]: time="2025-05-14T23:54:34.691007635Z" level=info msg="StopPodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\"" May 14 23:54:34.702844 containerd[1458]: time="2025-05-14T23:54:34.702586447Z" level=info msg="Ensure that sandbox a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4 in task-service has been cleanup successfully" May 14 23:54:34.703247 kubelet[2636]: I0514 23:54:34.703220 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b" May 14 23:54:34.706269 containerd[1458]: time="2025-05-14T23:54:34.706166472Z" level=info msg="TearDown network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" successfully" May 14 23:54:34.706269 containerd[1458]: time="2025-05-14T23:54:34.706200391Z" level=info msg="StopPodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" returns successfully" May 14 23:54:34.710438 containerd[1458]: time="2025-05-14T23:54:34.710382960Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\"" May 14 23:54:34.711967 containerd[1458]: time="2025-05-14T23:54:34.711475531Z" level=info msg="TearDown network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" successfully" May 14 23:54:34.711967 containerd[1458]: time="2025-05-14T23:54:34.711512050Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" returns successfully" May 14 23:54:34.711967 containerd[1458]: time="2025-05-14T23:54:34.711643766Z" level=info msg="StopPodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\"" May 14 23:54:34.712625 containerd[1458]: time="2025-05-14T23:54:34.712584541Z" level=info msg="Ensure that sandbox f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b in task-service has been cleanup successfully" May 14 23:54:34.712872 containerd[1458]: time="2025-05-14T23:54:34.712851374Z" level=info msg="TearDown network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" successfully" May 14 23:54:34.712872 containerd[1458]: time="2025-05-14T23:54:34.712870414Z" level=info msg="StopPodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" returns successfully" May 14 23:54:34.722368 containerd[1458]: time="2025-05-14T23:54:34.722317722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:2,}" May 14 23:54:34.723259 containerd[1458]: time="2025-05-14T23:54:34.723219458Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\"" May 14 23:54:34.723458 
containerd[1458]: time="2025-05-14T23:54:34.723432612Z" level=info msg="TearDown network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" successfully" May 14 23:54:34.723513 containerd[1458]: time="2025-05-14T23:54:34.723450652Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" returns successfully" May 14 23:54:34.727573 containerd[1458]: time="2025-05-14T23:54:34.727520024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:2,}" May 14 23:54:34.784718 containerd[1458]: time="2025-05-14T23:54:34.784671182Z" level=error msg="Failed to destroy network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.785232 containerd[1458]: time="2025-05-14T23:54:34.785115531Z" level=error msg="encountered an error cleaning up failed sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.785288 containerd[1458]: time="2025-05-14T23:54:34.785267447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.785674 kubelet[2636]: E0514 23:54:34.785639 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.785823 kubelet[2636]: E0514 23:54:34.785699 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:34.785823 kubelet[2636]: E0514 23:54:34.785719 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:34.785823 kubelet[2636]: E0514 
23:54:34.785759 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55976b5db9-6xq9g_calico-system(8967758b-5095-4f47-a879-8fbab63daefc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55976b5db9-6xq9g_calico-system(8967758b-5095-4f47-a879-8fbab63daefc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" podUID="8967758b-5095-4f47-a879-8fbab63daefc" May 14 23:54:34.876703 systemd[1]: run-netns-cni\x2d04099e14\x2d3a79\x2dc0fd\x2d234c\x2dee57a97930c5.mount: Deactivated successfully. May 14 23:54:34.876808 systemd[1]: run-netns-cni\x2db724af5e\x2da378\x2df733\x2d4f0e\x2d1ff450c9b3f8.mount: Deactivated successfully. May 14 23:54:34.876861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544-shm.mount: Deactivated successfully. May 14 23:54:34.876911 systemd[1]: run-netns-cni\x2d3ae6737d\x2d846e\x2d29dd\x2d2b67\x2df6538d14c276.mount: Deactivated successfully. May 14 23:54:34.876956 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4-shm.mount: Deactivated successfully. May 14 23:54:34.877005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4-shm.mount: Deactivated successfully. May 14 23:54:34.877056 systemd[1]: run-netns-cni\x2de35b175b\x2dfb4c\x2d8631\x2ddf30\x2d1e3c25845db4.mount: Deactivated successfully. May 14 23:54:34.877118 systemd[1]: run-netns-cni\x2d57f41f10\x2dccdd\x2d6003\x2d0afb\x2d828bc05ded75.mount: Deactivated successfully. May 14 23:54:34.877161 systemd[1]: run-netns-cni\x2dc05f6dd5\x2d3a0c\x2d8278\x2d2eab\x2d11e4982a74c2.mount: Deactivated successfully. May 14 23:54:34.888075 containerd[1458]: time="2025-05-14T23:54:34.888018552Z" level=error msg="Failed to destroy network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.890094 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de-shm.mount: Deactivated successfully. 
May 14 23:54:34.890358 containerd[1458]: time="2025-05-14T23:54:34.890278571Z" level=error msg="encountered an error cleaning up failed sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.890482 containerd[1458]: time="2025-05-14T23:54:34.890461407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.891756 kubelet[2636]: E0514 23:54:34.891712 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.891852 kubelet[2636]: E0514 23:54:34.891773 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:34.891852 kubelet[2636]: E0514 23:54:34.891794 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:34.891852 kubelet[2636]: E0514 23:54:34.891829 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cf5cf6-mfgt6_calico-apiserver(dbd41afb-946c-4f95-a556-b712c9bfb043)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cf5cf6-mfgt6_calico-apiserver(dbd41afb-946c-4f95-a556-b712c9bfb043)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" podUID="dbd41afb-946c-4f95-a556-b712c9bfb043" May 14 23:54:34.891963 containerd[1458]: time="2025-05-14T23:54:34.891895648Z" level=error msg="Failed to destroy network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.894011 containerd[1458]: time="2025-05-14T23:54:34.893798878Z" level=error msg="encountered an error cleaning up failed sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.894011 containerd[1458]: time="2025-05-14T23:54:34.893862156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.894151 kubelet[2636]: E0514 23:54:34.894111 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.894219 kubelet[2636]: E0514 23:54:34.894162 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:34.894219 kubelet[2636]: E0514 23:54:34.894206 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:34.894272 kubelet[2636]: E0514 23:54:34.894239 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b47sh_kube-system(f03276de-018a-4c35-8ec5-8ee9060e84e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b47sh_kube-system(f03276de-018a-4c35-8ec5-8ee9060e84e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-b47sh" podUID="f03276de-018a-4c35-8ec5-8ee9060e84e4" May 14 23:54:34.895902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201-shm.mount: Deactivated successfully. 
May 14 23:54:34.901388 containerd[1458]: time="2025-05-14T23:54:34.901237640Z" level=error msg="Failed to destroy network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.904133 containerd[1458]: time="2025-05-14T23:54:34.904084084Z" level=error msg="encountered an error cleaning up failed sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.904206 containerd[1458]: time="2025-05-14T23:54:34.904159082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.904871 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1-shm.mount: Deactivated successfully. May 14 23:54:34.905793 kubelet[2636]: E0514 23:54:34.905694 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.905793 kubelet[2636]: E0514 23:54:34.905755 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:34.905793 kubelet[2636]: E0514 23:54:34.905773 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:34.906086 kubelet[2636]: E0514 23:54:34.905814 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-m9489_kube-system(5296ee56-ba99-4940-a157-166e05449d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-m9489_kube-system(5296ee56-ba99-4940-a157-166e05449d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m9489" podUID="5296ee56-ba99-4940-a157-166e05449d2c" May 14 23:54:34.908723 containerd[1458]: time="2025-05-14T23:54:34.908602284Z" level=error msg="Failed to destroy network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.909256 containerd[1458]: time="2025-05-14T23:54:34.909222027Z" level=error msg="encountered an error cleaning up failed sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.910156 containerd[1458]: time="2025-05-14T23:54:34.910031526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.910348 kubelet[2636]: E0514 23:54:34.910281 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.910348 kubelet[2636]: E0514 23:54:34.910331 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq692" May 14 23:54:34.910473 kubelet[2636]: E0514 23:54:34.910357 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq692" May 14 23:54:34.910572 kubelet[2636]: E0514 23:54:34.910500 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xq692_calico-system(78b8ecbd-ebea-41ed-a71c-8ddd96e45e21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xq692_calico-system(78b8ecbd-ebea-41ed-a71c-8ddd96e45e21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xq692" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" May 14 23:54:34.911959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30-shm.mount: Deactivated successfully. May 14 23:54:34.920653 containerd[1458]: time="2025-05-14T23:54:34.920609124Z" level=error msg="Failed to destroy network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.921003 containerd[1458]: time="2025-05-14T23:54:34.920974074Z" level=error msg="encountered an error cleaning up failed sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.921434 containerd[1458]: time="2025-05-14T23:54:34.921044593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.921507 kubelet[2636]: E0514 23:54:34.921266 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:34.921507 kubelet[2636]: E0514 23:54:34.921317 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:34.921507 kubelet[2636]: E0514 23:54:34.921347 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:34.921609 kubelet[2636]: E0514 23:54:34.921386 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-948cf5cf6-nxpmq_calico-apiserver(e2130025-45f8-46c5-b0f8-d9cf000b93a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cf5cf6-nxpmq_calico-apiserver(e2130025-45f8-46c5-b0f8-d9cf000b93a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" podUID="e2130025-45f8-46c5-b0f8-d9cf000b93a0" May 14 23:54:35.707686 kubelet[2636]: I0514 23:54:35.707656 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6" May 14 23:54:35.708213 containerd[1458]: time="2025-05-14T23:54:35.708179171Z" level=info msg="StopPodSandbox for \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\"" May 14 23:54:35.709905 kubelet[2636]: I0514 23:54:35.709856 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7" May 14 23:54:35.710635 containerd[1458]: time="2025-05-14T23:54:35.710557230Z" level=info msg="StopPodSandbox for \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\"" May 14 23:54:35.711131 containerd[1458]: time="2025-05-14T23:54:35.710921660Z" level=info msg="Ensure that sandbox e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7 in task-service has been cleanup successfully" May 14 23:54:35.711718 containerd[1458]: time="2025-05-14T23:54:35.711188894Z" level=info msg="Ensure that sandbox 2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6 in task-service has been cleanup successfully" May 14 23:54:35.712292 containerd[1458]: time="2025-05-14T23:54:35.712240947Z" level=info msg="TearDown network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\" successfully" May 14 23:54:35.712529 containerd[1458]: time="2025-05-14T23:54:35.712424982Z" level=info msg="StopPodSandbox for \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\" returns successfully" May 14 23:54:35.713045 containerd[1458]: time="2025-05-14T23:54:35.712783573Z" level=info msg="TearDown network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\" successfully" May 14 23:54:35.713045 containerd[1458]: time="2025-05-14T23:54:35.712804772Z" level=info msg="StopPodSandbox for \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\" returns successfully" May 14 23:54:35.713168 kubelet[2636]: I0514 23:54:35.712865 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201" May 14 23:54:35.713503 containerd[1458]: time="2025-05-14T23:54:35.713405477Z" level=info msg="StopPodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\"" May 14 23:54:35.713503 containerd[1458]: time="2025-05-14T23:54:35.713493994Z" level=info msg="TearDown network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" successfully" May 14 23:54:35.713503 containerd[1458]: time="2025-05-14T23:54:35.713505074Z" level=info msg="StopPodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" returns 
successfully" May 14 23:54:35.713974 containerd[1458]: time="2025-05-14T23:54:35.713847825Z" level=info msg="StopPodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\"" May 14 23:54:35.714338 containerd[1458]: time="2025-05-14T23:54:35.714147418Z" level=info msg="StopPodSandbox for \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\"" May 14 23:54:35.714578 containerd[1458]: time="2025-05-14T23:54:35.714550807Z" level=info msg="Ensure that sandbox cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201 in task-service has been cleanup successfully" May 14 23:54:35.714797 containerd[1458]: time="2025-05-14T23:54:35.714252975Z" level=info msg="TearDown network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" successfully" May 14 23:54:35.714797 containerd[1458]: time="2025-05-14T23:54:35.714756322Z" level=info msg="StopPodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" returns successfully" May 14 23:54:35.714871 containerd[1458]: time="2025-05-14T23:54:35.714301054Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\"" May 14 23:54:35.714895 containerd[1458]: time="2025-05-14T23:54:35.714872199Z" level=info msg="TearDown network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" successfully" May 14 23:54:35.714895 containerd[1458]: time="2025-05-14T23:54:35.714880679Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" returns successfully" May 14 23:54:35.715199 containerd[1458]: time="2025-05-14T23:54:35.715145432Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\"" May 14 23:54:35.715620 containerd[1458]: time="2025-05-14T23:54:35.715545982Z" level=info msg="TearDown network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\" successfully" May 14 23:54:35.715700 containerd[1458]: time="2025-05-14T23:54:35.715669778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:3,}" May 14 23:54:35.715866 containerd[1458]: time="2025-05-14T23:54:35.715846534Z" level=info msg="TearDown network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" successfully" May 14 23:54:35.715866 containerd[1458]: time="2025-05-14T23:54:35.715863773Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" returns successfully" May 14 23:54:35.716320 containerd[1458]: time="2025-05-14T23:54:35.716195805Z" level=info msg="StopPodSandbox for \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\" returns successfully" May 14 23:54:35.716320 containerd[1458]: time="2025-05-14T23:54:35.716221164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:3,}" May 14 23:54:35.716944 kubelet[2636]: I0514 23:54:35.716922 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30" May 14 23:54:35.717321 containerd[1458]: time="2025-05-14T23:54:35.717254378Z" level=info msg="StopPodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\"" May 14 
23:54:35.717591 containerd[1458]: time="2025-05-14T23:54:35.717493092Z" level=info msg="TearDown network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" successfully" May 14 23:54:35.717591 containerd[1458]: time="2025-05-14T23:54:35.717525891Z" level=info msg="StopPodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" returns successfully" May 14 23:54:35.718388 containerd[1458]: time="2025-05-14T23:54:35.718281231Z" level=info msg="StopPodSandbox for \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\"" May 14 23:54:35.718388 containerd[1458]: time="2025-05-14T23:54:35.718354629Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\"" May 14 23:54:35.718490 containerd[1458]: time="2025-05-14T23:54:35.718427388Z" level=info msg="TearDown network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" successfully" May 14 23:54:35.718490 containerd[1458]: time="2025-05-14T23:54:35.718437747Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" returns successfully" May 14 23:54:35.718854 containerd[1458]: time="2025-05-14T23:54:35.718811258Z" level=info msg="Ensure that sandbox 23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30 in task-service has been cleanup successfully" May 14 23:54:35.719058 containerd[1458]: time="2025-05-14T23:54:35.719005093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:3,}" May 14 23:54:35.719423 containerd[1458]: time="2025-05-14T23:54:35.719301805Z" level=info msg="TearDown network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\" successfully" May 14 23:54:35.719423 containerd[1458]: time="2025-05-14T23:54:35.719327524Z" level=info msg="StopPodSandbox for \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\" returns successfully" May 14 23:54:35.719977 containerd[1458]: time="2025-05-14T23:54:35.719944909Z" level=info msg="StopPodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\"" May 14 23:54:35.720172 containerd[1458]: time="2025-05-14T23:54:35.720034786Z" level=info msg="TearDown network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" successfully" May 14 23:54:35.720216 containerd[1458]: time="2025-05-14T23:54:35.720172463Z" level=info msg="StopPodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" returns successfully" May 14 23:54:35.720771 kubelet[2636]: I0514 23:54:35.720523 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de" May 14 23:54:35.720853 containerd[1458]: time="2025-05-14T23:54:35.720677370Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\"" May 14 23:54:35.721095 containerd[1458]: time="2025-05-14T23:54:35.720993242Z" level=info msg="TearDown network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" successfully" May 14 23:54:35.721095 containerd[1458]: time="2025-05-14T23:54:35.721024481Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" returns successfully" May 14 23:54:35.721657 containerd[1458]: time="2025-05-14T23:54:35.721614946Z" 
level=info msg="StopPodSandbox for \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\"" May 14 23:54:35.722221 containerd[1458]: time="2025-05-14T23:54:35.721760782Z" level=info msg="Ensure that sandbox b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de in task-service has been cleanup successfully" May 14 23:54:35.722221 containerd[1458]: time="2025-05-14T23:54:35.722114013Z" level=info msg="TearDown network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\" successfully" May 14 23:54:35.722221 containerd[1458]: time="2025-05-14T23:54:35.722131372Z" level=info msg="StopPodSandbox for \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\" returns successfully" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.722390726Z" level=info msg="StopPodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\"" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.722394646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:3,}" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.722740557Z" level=info msg="TearDown network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" successfully" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.722756316Z" level=info msg="StopPodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" returns successfully" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.723145506Z" level=info msg="StopPodSandbox for \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\"" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.723287023Z" level=info msg="Ensure that sandbox 47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1 in task-service has been cleanup successfully" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.723438859Z" level=info msg="TearDown network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\" successfully" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.723453338Z" level=info msg="StopPodSandbox for \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\" returns successfully" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.723622414Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\"" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.723685092Z" level=info msg="TearDown network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" successfully" May 14 23:54:35.723763 containerd[1458]: time="2025-05-14T23:54:35.723695772Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" returns successfully" May 14 23:54:35.724028 kubelet[2636]: I0514 23:54:35.722739 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1" May 14 23:54:35.724072 containerd[1458]: time="2025-05-14T23:54:35.723830809Z" level=info msg="StopPodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\"" May 14 23:54:35.724072 containerd[1458]: time="2025-05-14T23:54:35.723894407Z" level=info msg="TearDown network for sandbox 
\"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" successfully" May 14 23:54:35.724072 containerd[1458]: time="2025-05-14T23:54:35.723904527Z" level=info msg="StopPodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" returns successfully" May 14 23:54:35.724072 containerd[1458]: time="2025-05-14T23:54:35.724019164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:3,}" May 14 23:54:35.724493 containerd[1458]: time="2025-05-14T23:54:35.724442913Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\"" May 14 23:54:35.724710 containerd[1458]: time="2025-05-14T23:54:35.724678827Z" level=info msg="TearDown network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" successfully" May 14 23:54:35.724775 containerd[1458]: time="2025-05-14T23:54:35.724760145Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" returns successfully" May 14 23:54:35.725299 containerd[1458]: time="2025-05-14T23:54:35.725264012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:3,}" May 14 23:54:35.866939 systemd[1]: run-netns-cni\x2da655ced4\x2dbe83\x2d1587\x2dd633\x2d37cf709543fd.mount: Deactivated successfully. May 14 23:54:35.867051 systemd[1]: run-netns-cni\x2d4f9f887b\x2d446d\x2ddc97\x2d3b22\x2d3a09bd4aef67.mount: Deactivated successfully. May 14 23:54:35.867104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6-shm.mount: Deactivated successfully. May 14 23:54:35.867156 systemd[1]: run-netns-cni\x2d21b6763d\x2dbb34\x2d9ee3\x2d8be2\x2db5b2a1d3fe31.mount: Deactivated successfully. May 14 23:54:35.867203 systemd[1]: run-netns-cni\x2d4ab9fc8a\x2d9558\x2d1a37\x2d3d2f\x2d2e0fbff27edb.mount: Deactivated successfully. May 14 23:54:35.867243 systemd[1]: run-netns-cni\x2d5e8051d7\x2d5bb5\x2dfd66\x2da613\x2d00eec4c52dc8.mount: Deactivated successfully. May 14 23:54:35.867286 systemd[1]: run-netns-cni\x2d07a285bc\x2dd9bb\x2d4e65\x2df919\x2d90f8fafa7da7.mount: Deactivated successfully. 
May 14 23:54:36.142168 containerd[1458]: time="2025-05-14T23:54:36.141811669Z" level=error msg="Failed to destroy network for sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.142622 containerd[1458]: time="2025-05-14T23:54:36.142394174Z" level=error msg="encountered an error cleaning up failed sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.143744 containerd[1458]: time="2025-05-14T23:54:36.143699342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.144063 kubelet[2636]: E0514 23:54:36.144016 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.144130 kubelet[2636]: E0514 23:54:36.144088 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:36.144130 kubelet[2636]: E0514 23:54:36.144112 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-b47sh" May 14 23:54:36.145233 kubelet[2636]: E0514 23:54:36.144156 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-b47sh_kube-system(f03276de-018a-4c35-8ec5-8ee9060e84e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-b47sh_kube-system(f03276de-018a-4c35-8ec5-8ee9060e84e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-b47sh" podUID="f03276de-018a-4c35-8ec5-8ee9060e84e4" May 14 23:54:36.159762 containerd[1458]: time="2025-05-14T23:54:36.159714464Z" level=error msg="Failed to destroy network for sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.161571 containerd[1458]: time="2025-05-14T23:54:36.160994152Z" level=error msg="encountered an error cleaning up failed sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.161571 containerd[1458]: time="2025-05-14T23:54:36.161081150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.161689 kubelet[2636]: E0514 23:54:36.161295 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.161689 kubelet[2636]: E0514 23:54:36.161346 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:36.161689 kubelet[2636]: E0514 23:54:36.161365 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-m9489" May 14 23:54:36.161839 kubelet[2636]: E0514 23:54:36.161407 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-m9489_kube-system(5296ee56-ba99-4940-a157-166e05449d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-m9489_kube-system(5296ee56-ba99-4940-a157-166e05449d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-m9489" podUID="5296ee56-ba99-4940-a157-166e05449d2c" May 14 23:54:36.166711 containerd[1458]: time="2025-05-14T23:54:36.166678091Z" level=error msg="Failed to destroy network for sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.167131 containerd[1458]: time="2025-05-14T23:54:36.167102400Z" level=error msg="encountered an error cleaning up failed sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.167259 containerd[1458]: time="2025-05-14T23:54:36.167239077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.167487 kubelet[2636]: E0514 23:54:36.167443 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.167656 kubelet[2636]: E0514 23:54:36.167495 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq692" May 14 23:54:36.167656 kubelet[2636]: E0514 23:54:36.167513 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xq692" May 14 23:54:36.167656 kubelet[2636]: E0514 23:54:36.167579 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xq692_calico-system(78b8ecbd-ebea-41ed-a71c-8ddd96e45e21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xq692_calico-system(78b8ecbd-ebea-41ed-a71c-8ddd96e45e21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xq692" podUID="78b8ecbd-ebea-41ed-a71c-8ddd96e45e21" May 14 23:54:36.169756 containerd[1458]: time="2025-05-14T23:54:36.169487541Z" level=error msg="Failed to destroy network for sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.170237 containerd[1458]: time="2025-05-14T23:54:36.170205283Z" level=error msg="encountered an error cleaning up failed sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.170294 containerd[1458]: time="2025-05-14T23:54:36.170269002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.170471 kubelet[2636]: E0514 23:54:36.170436 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.170507 kubelet[2636]: E0514 23:54:36.170489 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:36.170530 kubelet[2636]: E0514 23:54:36.170506 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" May 14 23:54:36.170625 kubelet[2636]: E0514 23:54:36.170596 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cf5cf6-mfgt6_calico-apiserver(dbd41afb-946c-4f95-a556-b712c9bfb043)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-948cf5cf6-mfgt6_calico-apiserver(dbd41afb-946c-4f95-a556-b712c9bfb043)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" podUID="dbd41afb-946c-4f95-a556-b712c9bfb043" May 14 23:54:36.172052 containerd[1458]: time="2025-05-14T23:54:36.172007519Z" level=error msg="Failed to destroy network for sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.172805 containerd[1458]: time="2025-05-14T23:54:36.172766580Z" level=error msg="encountered an error cleaning up failed sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.172856 containerd[1458]: time="2025-05-14T23:54:36.172829298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.173107 kubelet[2636]: E0514 23:54:36.173062 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.173209 kubelet[2636]: E0514 23:54:36.173113 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:36.173209 kubelet[2636]: E0514 23:54:36.173131 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" May 14 23:54:36.173209 kubelet[2636]: E0514 23:54:36.173159 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-948cf5cf6-nxpmq_calico-apiserver(e2130025-45f8-46c5-b0f8-d9cf000b93a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-948cf5cf6-nxpmq_calico-apiserver(e2130025-45f8-46c5-b0f8-d9cf000b93a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" podUID="e2130025-45f8-46c5-b0f8-d9cf000b93a0" May 14 23:54:36.180862 containerd[1458]: time="2025-05-14T23:54:36.180818380Z" level=error msg="Failed to destroy network for sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.181172 containerd[1458]: time="2025-05-14T23:54:36.181136172Z" level=error msg="encountered an error cleaning up failed sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.181220 containerd[1458]: time="2025-05-14T23:54:36.181191650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.181514 kubelet[2636]: E0514 23:54:36.181427 2636 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 23:54:36.181623 kubelet[2636]: E0514 23:54:36.181595 2636 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:36.181663 kubelet[2636]: E0514 23:54:36.181629 2636 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" May 14 23:54:36.181716 kubelet[2636]: E0514 23:54:36.181683 2636 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-55976b5db9-6xq9g_calico-system(8967758b-5095-4f47-a879-8fbab63daefc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55976b5db9-6xq9g_calico-system(8967758b-5095-4f47-a879-8fbab63daefc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" podUID="8967758b-5095-4f47-a879-8fbab63daefc" May 14 23:54:36.244186 containerd[1458]: time="2025-05-14T23:54:36.244120047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:36.244725 containerd[1458]: time="2025-05-14T23:54:36.244675114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 14 23:54:36.246346 containerd[1458]: time="2025-05-14T23:54:36.245836045Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:36.249374 containerd[1458]: time="2025-05-14T23:54:36.249327278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.610115519s" May 14 23:54:36.249461 containerd[1458]: time="2025-05-14T23:54:36.249384957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 14 23:54:36.250573 containerd[1458]: time="2025-05-14T23:54:36.250074619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:36.260025 containerd[1458]: time="2025-05-14T23:54:36.259983013Z" level=info msg="CreateContainer within sandbox \"310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 23:54:36.275408 containerd[1458]: time="2025-05-14T23:54:36.275359071Z" level=info msg="CreateContainer within sandbox \"310ff5fb55410f4fa79f80560079d320cec907dd8b9d461034f5b8ddcf3c99b2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6b493e470d2ede2b262e359c39742e461c899ba147eb0b265250624c1be9cbed\"" May 14 23:54:36.275970 containerd[1458]: time="2025-05-14T23:54:36.275924657Z" level=info msg="StartContainer for \"6b493e470d2ede2b262e359c39742e461c899ba147eb0b265250624c1be9cbed\"" May 14 23:54:36.338775 systemd[1]: Started cri-containerd-6b493e470d2ede2b262e359c39742e461c899ba147eb0b265250624c1be9cbed.scope - libcontainer container 6b493e470d2ede2b262e359c39742e461c899ba147eb0b265250624c1be9cbed. 
May 14 23:54:36.377477 containerd[1458]: time="2025-05-14T23:54:36.376855590Z" level=info msg="StartContainer for \"6b493e470d2ede2b262e359c39742e461c899ba147eb0b265250624c1be9cbed\" returns successfully" May 14 23:54:36.590844 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 23:54:36.590936 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 14 23:54:36.728704 kubelet[2636]: I0514 23:54:36.728676 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300" May 14 23:54:36.729265 containerd[1458]: time="2025-05-14T23:54:36.729232718Z" level=info msg="StopPodSandbox for \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\"" May 14 23:54:36.729414 containerd[1458]: time="2025-05-14T23:54:36.729398034Z" level=info msg="Ensure that sandbox 7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300 in task-service has been cleanup successfully" May 14 23:54:36.732379 kubelet[2636]: I0514 23:54:36.732354 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469" May 14 23:54:36.733047 containerd[1458]: time="2025-05-14T23:54:36.732924626Z" level=info msg="StopPodSandbox for \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\"" May 14 23:54:36.733461 containerd[1458]: time="2025-05-14T23:54:36.733106342Z" level=info msg="Ensure that sandbox d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469 in task-service has been cleanup successfully" May 14 23:54:36.733746 containerd[1458]: time="2025-05-14T23:54:36.733721486Z" level=info msg="TearDown network for sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\" successfully" May 14 23:54:36.733778 containerd[1458]: time="2025-05-14T23:54:36.733745806Z" level=info msg="StopPodSandbox for \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\" returns successfully" May 14 23:54:36.735474 containerd[1458]: time="2025-05-14T23:54:36.735446643Z" level=info msg="StopPodSandbox for \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\"" May 14 23:54:36.735612 containerd[1458]: time="2025-05-14T23:54:36.735558001Z" level=info msg="TearDown network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\" successfully" May 14 23:54:36.735612 containerd[1458]: time="2025-05-14T23:54:36.735573040Z" level=info msg="StopPodSandbox for \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\" returns successfully" May 14 23:54:36.736472 containerd[1458]: time="2025-05-14T23:54:36.736325542Z" level=info msg="StopPodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\"" May 14 23:54:36.736472 containerd[1458]: time="2025-05-14T23:54:36.736415979Z" level=info msg="TearDown network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" successfully" May 14 23:54:36.736472 containerd[1458]: time="2025-05-14T23:54:36.736426899Z" level=info msg="StopPodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" returns successfully" May 14 23:54:36.737592 containerd[1458]: time="2025-05-14T23:54:36.737055004Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\"" May 14 23:54:36.738474 containerd[1458]: time="2025-05-14T23:54:36.738311092Z" level=info 
msg="TearDown network for sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\" successfully" May 14 23:54:36.738474 containerd[1458]: time="2025-05-14T23:54:36.738335772Z" level=info msg="StopPodSandbox for \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\" returns successfully" May 14 23:54:36.738474 containerd[1458]: time="2025-05-14T23:54:36.738341732Z" level=info msg="TearDown network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" successfully" May 14 23:54:36.738474 containerd[1458]: time="2025-05-14T23:54:36.738383131Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" returns successfully" May 14 23:54:36.742035 containerd[1458]: time="2025-05-14T23:54:36.741295858Z" level=info msg="StopPodSandbox for \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\"" May 14 23:54:36.742035 containerd[1458]: time="2025-05-14T23:54:36.741399776Z" level=info msg="TearDown network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\" successfully" May 14 23:54:36.742035 containerd[1458]: time="2025-05-14T23:54:36.741409735Z" level=info msg="StopPodSandbox for \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\" returns successfully" May 14 23:54:36.742035 containerd[1458]: time="2025-05-14T23:54:36.741311098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:4,}" May 14 23:54:36.742217 containerd[1458]: time="2025-05-14T23:54:36.742044600Z" level=info msg="StopPodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\"" May 14 23:54:36.742217 containerd[1458]: time="2025-05-14T23:54:36.742119558Z" level=info msg="TearDown network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" successfully" May 14 23:54:36.742217 containerd[1458]: time="2025-05-14T23:54:36.742128918Z" level=info msg="StopPodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" returns successfully" May 14 23:54:36.743433 containerd[1458]: time="2025-05-14T23:54:36.742720063Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\"" May 14 23:54:36.743433 containerd[1458]: time="2025-05-14T23:54:36.742801901Z" level=info msg="TearDown network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" successfully" May 14 23:54:36.743433 containerd[1458]: time="2025-05-14T23:54:36.742811501Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" returns successfully" May 14 23:54:36.743433 containerd[1458]: time="2025-05-14T23:54:36.743354807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:4,}" May 14 23:54:36.745171 kubelet[2636]: I0514 23:54:36.745120 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a" May 14 23:54:36.747185 containerd[1458]: time="2025-05-14T23:54:36.747115074Z" level=info msg="StopPodSandbox for \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\"" May 14 23:54:36.747405 containerd[1458]: time="2025-05-14T23:54:36.747293269Z" level=info msg="Ensure that sandbox 
7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a in task-service has been cleanup successfully" May 14 23:54:36.749235 containerd[1458]: time="2025-05-14T23:54:36.749064345Z" level=info msg="TearDown network for sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\" successfully" May 14 23:54:36.749852 containerd[1458]: time="2025-05-14T23:54:36.749726369Z" level=info msg="StopPodSandbox for \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\" returns successfully" May 14 23:54:36.751291 containerd[1458]: time="2025-05-14T23:54:36.751036896Z" level=info msg="StopPodSandbox for \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\"" May 14 23:54:36.751291 containerd[1458]: time="2025-05-14T23:54:36.751159213Z" level=info msg="TearDown network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\" successfully" May 14 23:54:36.751291 containerd[1458]: time="2025-05-14T23:54:36.751170453Z" level=info msg="StopPodSandbox for \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\" returns successfully" May 14 23:54:36.751498 kubelet[2636]: I0514 23:54:36.751434 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028" May 14 23:54:36.751817 containerd[1458]: time="2025-05-14T23:54:36.751781718Z" level=info msg="StopPodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\"" May 14 23:54:36.751914 containerd[1458]: time="2025-05-14T23:54:36.751857636Z" level=info msg="TearDown network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" successfully" May 14 23:54:36.751914 containerd[1458]: time="2025-05-14T23:54:36.751867796Z" level=info msg="StopPodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" returns successfully" May 14 23:54:36.752744 containerd[1458]: time="2025-05-14T23:54:36.752348184Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\"" May 14 23:54:36.752744 containerd[1458]: time="2025-05-14T23:54:36.752429662Z" level=info msg="TearDown network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" successfully" May 14 23:54:36.752744 containerd[1458]: time="2025-05-14T23:54:36.752439261Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" returns successfully" May 14 23:54:36.753096 containerd[1458]: time="2025-05-14T23:54:36.753046486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:4,}" May 14 23:54:36.755613 kubelet[2636]: I0514 23:54:36.755411 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d" May 14 23:54:36.758418 containerd[1458]: time="2025-05-14T23:54:36.757358139Z" level=info msg="StopPodSandbox for \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\"" May 14 23:54:36.758418 containerd[1458]: time="2025-05-14T23:54:36.757609693Z" level=info msg="StopPodSandbox for \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\"" May 14 23:54:36.758900 containerd[1458]: time="2025-05-14T23:54:36.758666747Z" level=info msg="Ensure that sandbox 25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d in 
task-service has been cleanup successfully" May 14 23:54:36.759253 containerd[1458]: time="2025-05-14T23:54:36.759127775Z" level=info msg="Ensure that sandbox 9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028 in task-service has been cleanup successfully" May 14 23:54:36.761613 containerd[1458]: time="2025-05-14T23:54:36.761582554Z" level=info msg="TearDown network for sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\" successfully" May 14 23:54:36.761939 containerd[1458]: time="2025-05-14T23:54:36.761914746Z" level=info msg="StopPodSandbox for \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\" returns successfully" May 14 23:54:36.762409 containerd[1458]: time="2025-05-14T23:54:36.761488117Z" level=info msg="TearDown network for sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\" successfully" May 14 23:54:36.762713 containerd[1458]: time="2025-05-14T23:54:36.762691647Z" level=info msg="StopPodSandbox for \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\" returns successfully" May 14 23:54:36.764382 containerd[1458]: time="2025-05-14T23:54:36.764264048Z" level=info msg="StopPodSandbox for \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\"" May 14 23:54:36.765373 containerd[1458]: time="2025-05-14T23:54:36.765343421Z" level=info msg="TearDown network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\" successfully" May 14 23:54:36.765643 containerd[1458]: time="2025-05-14T23:54:36.765448218Z" level=info msg="StopPodSandbox for \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\" returns successfully" May 14 23:54:36.765643 containerd[1458]: time="2025-05-14T23:54:36.764731276Z" level=info msg="StopPodSandbox for \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\"" May 14 23:54:36.766001 containerd[1458]: time="2025-05-14T23:54:36.765800930Z" level=info msg="TearDown network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\" successfully" May 14 23:54:36.766001 containerd[1458]: time="2025-05-14T23:54:36.765819809Z" level=info msg="StopPodSandbox for \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\" returns successfully" May 14 23:54:36.767943 containerd[1458]: time="2025-05-14T23:54:36.766641709Z" level=info msg="StopPodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\"" May 14 23:54:36.767943 containerd[1458]: time="2025-05-14T23:54:36.766721747Z" level=info msg="TearDown network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" successfully" May 14 23:54:36.767943 containerd[1458]: time="2025-05-14T23:54:36.766731866Z" level=info msg="StopPodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" returns successfully" May 14 23:54:36.768330 containerd[1458]: time="2025-05-14T23:54:36.768288348Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\"" May 14 23:54:36.768435 containerd[1458]: time="2025-05-14T23:54:36.768382705Z" level=info msg="TearDown network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" successfully" May 14 23:54:36.768435 containerd[1458]: time="2025-05-14T23:54:36.768392505Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" returns successfully" May 14 23:54:36.768435 containerd[1458]: 
time="2025-05-14T23:54:36.768433384Z" level=info msg="StopPodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\"" May 14 23:54:36.768507 containerd[1458]: time="2025-05-14T23:54:36.768481863Z" level=info msg="TearDown network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" successfully" May 14 23:54:36.768507 containerd[1458]: time="2025-05-14T23:54:36.768490543Z" level=info msg="StopPodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" returns successfully" May 14 23:54:36.769318 kubelet[2636]: I0514 23:54:36.769255 2636 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9" May 14 23:54:36.772886 kubelet[2636]: I0514 23:54:36.772820 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vsql8" podStartSLOduration=1.264938347 podStartE2EDuration="11.772803956s" podCreationTimestamp="2025-05-14 23:54:25 +0000 UTC" firstStartedPulling="2025-05-14 23:54:25.742354887 +0000 UTC m=+21.289663812" lastFinishedPulling="2025-05-14 23:54:36.250220496 +0000 UTC m=+31.797529421" observedRunningTime="2025-05-14 23:54:36.772271489 +0000 UTC m=+32.319580454" watchObservedRunningTime="2025-05-14 23:54:36.772803956 +0000 UTC m=+32.320112921" May 14 23:54:36.776778 containerd[1458]: time="2025-05-14T23:54:36.776694499Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\"" May 14 23:54:36.777943 containerd[1458]: time="2025-05-14T23:54:36.776861895Z" level=info msg="StopPodSandbox for \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\"" May 14 23:54:36.778164 containerd[1458]: time="2025-05-14T23:54:36.778130583Z" level=info msg="Ensure that sandbox 2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9 in task-service has been cleanup successfully" May 14 23:54:36.778504 containerd[1458]: time="2025-05-14T23:54:36.778445655Z" level=info msg="TearDown network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" successfully" May 14 23:54:36.778504 containerd[1458]: time="2025-05-14T23:54:36.778471055Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" returns successfully" May 14 23:54:36.779145 containerd[1458]: time="2025-05-14T23:54:36.778146703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:4,}" May 14 23:54:36.779900 containerd[1458]: time="2025-05-14T23:54:36.779421911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:4,}" May 14 23:54:36.779900 containerd[1458]: time="2025-05-14T23:54:36.779675985Z" level=info msg="TearDown network for sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\" successfully" May 14 23:54:36.779900 containerd[1458]: time="2025-05-14T23:54:36.779694544Z" level=info msg="StopPodSandbox for \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\" returns successfully" May 14 23:54:36.780025 containerd[1458]: time="2025-05-14T23:54:36.779994617Z" level=info msg="StopPodSandbox for \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\"" May 14 23:54:36.780116 containerd[1458]: time="2025-05-14T23:54:36.780091775Z" level=info 
msg="TearDown network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\" successfully" May 14 23:54:36.780116 containerd[1458]: time="2025-05-14T23:54:36.780108414Z" level=info msg="StopPodSandbox for \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\" returns successfully" May 14 23:54:36.781642 containerd[1458]: time="2025-05-14T23:54:36.781036551Z" level=info msg="StopPodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\"" May 14 23:54:36.781642 containerd[1458]: time="2025-05-14T23:54:36.781244066Z" level=info msg="TearDown network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" successfully" May 14 23:54:36.781642 containerd[1458]: time="2025-05-14T23:54:36.781259306Z" level=info msg="StopPodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" returns successfully" May 14 23:54:36.784193 containerd[1458]: time="2025-05-14T23:54:36.784091755Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\"" May 14 23:54:36.784193 containerd[1458]: time="2025-05-14T23:54:36.784196193Z" level=info msg="TearDown network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" successfully" May 14 23:54:36.784287 containerd[1458]: time="2025-05-14T23:54:36.784205912Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" returns successfully" May 14 23:54:36.793954 containerd[1458]: time="2025-05-14T23:54:36.793529561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:4,}" May 14 23:54:36.874416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300-shm.mount: Deactivated successfully. May 14 23:54:36.874519 systemd[1]: run-netns-cni\x2d2c17c9ba\x2d4a39\x2da3e0\x2d41ef\x2dac227a335af6.mount: Deactivated successfully. May 14 23:54:36.874584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9-shm.mount: Deactivated successfully. May 14 23:54:36.874633 systemd[1]: run-netns-cni\x2ddf0224db\x2dafb6\x2d5d85\x2d737d\x2db3d43db9d4ee.mount: Deactivated successfully. May 14 23:54:36.874720 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a-shm.mount: Deactivated successfully. May 14 23:54:36.874780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875310046.mount: Deactivated successfully. 
May 14 23:54:37.327695 systemd-networkd[1388]: cali0218a991330: Link UP May 14 23:54:37.327913 systemd-networkd[1388]: cali0218a991330: Gained carrier May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:36.809 [INFO][4348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.020 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0 coredns-7db6d8ff4d- kube-system f03276de-018a-4c35-8ec5-8ee9060e84e4 692 0 2025-05-14 23:54:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-b47sh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0218a991330 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b47sh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b47sh-" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.021 [INFO][4348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b47sh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.254 [INFO][4470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" HandleID="k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Workload="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.277 [INFO][4470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" HandleID="k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Workload="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-b47sh", "timestamp":"2025-05-14 23:54:37.254789869 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.277 [INFO][4470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.277 [INFO][4470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.279 [INFO][4470] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.282 [INFO][4470] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.296 [INFO][4470] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.302 [INFO][4470] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.304 [INFO][4470] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.306 [INFO][4470] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.306 [INFO][4470] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.307 [INFO][4470] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770 May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.311 [INFO][4470] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.315 [INFO][4470] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.315 [INFO][4470] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" host="localhost" May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.315 [INFO][4470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:37.341876 containerd[1458]: 2025-05-14 23:54:37.315 [INFO][4470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" HandleID="k8s-pod-network.40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Workload="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" May 14 23:54:37.345982 containerd[1458]: 2025-05-14 23:54:37.318 [INFO][4348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b47sh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f03276de-018a-4c35-8ec5-8ee9060e84e4", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-b47sh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0218a991330", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.345982 containerd[1458]: 2025-05-14 23:54:37.318 [INFO][4348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b47sh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" May 14 23:54:37.345982 containerd[1458]: 2025-05-14 23:54:37.318 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0218a991330 ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b47sh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" May 14 23:54:37.345982 containerd[1458]: 2025-05-14 23:54:37.327 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b47sh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" May 14 23:54:37.345982 containerd[1458]: 2025-05-14 23:54:37.327 
[INFO][4348] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b47sh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f03276de-018a-4c35-8ec5-8ee9060e84e4", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770", Pod:"coredns-7db6d8ff4d-b47sh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0218a991330", MAC:"82:d8:35:5b:39:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.345982 containerd[1458]: 2025-05-14 23:54:37.339 [INFO][4348] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770" Namespace="kube-system" Pod="coredns-7db6d8ff4d-b47sh" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--b47sh-eth0" May 14 23:54:37.367510 systemd-networkd[1388]: cali4bf7b453527: Link UP May 14 23:54:37.368235 systemd-networkd[1388]: cali4bf7b453527: Gained carrier May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:36.813 [INFO][4360] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.016 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0 calico-kube-controllers-55976b5db9- calico-system 8967758b-5095-4f47-a879-8fbab63daefc 694 0 2025-05-14 23:54:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55976b5db9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55976b5db9-6xq9g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4bf7b453527 [] []}} 
ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Namespace="calico-system" Pod="calico-kube-controllers-55976b5db9-6xq9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.019 [INFO][4360] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Namespace="calico-system" Pod="calico-kube-controllers-55976b5db9-6xq9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.256 [INFO][4474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" HandleID="k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Workload="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.279 [INFO][4474] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" HandleID="k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Workload="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002db610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55976b5db9-6xq9g", "timestamp":"2025-05-14 23:54:37.256945177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.279 [INFO][4474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.315 [INFO][4474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.316 [INFO][4474] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.318 [INFO][4474] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.323 [INFO][4474] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.330 [INFO][4474] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.333 [INFO][4474] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.339 [INFO][4474] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.340 [INFO][4474] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.344 [INFO][4474] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.348 [INFO][4474] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.355 [INFO][4474] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.355 [INFO][4474] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" host="localhost" May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.355 [INFO][4474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
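Editor's note: when the endpoints are written back with "Added Mac, interface name, and active container ID", the recorded MACs (82:d8:35:5b:39:77 above, 22:66:d9:b9:6e:46 just below) are locally administered unicast addresses. A hedged sketch of generating an address with those two bits set correctly; this only illustrates the address class, not the code path Calico or the kernel actually uses:

package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	mac := make([]byte, 6)
	if _, err := rand.Read(mac); err != nil {
		panic(err)
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01 // set locally-administered bit, clear multicast bit
	fmt.Printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
		mac[0], mac[1], mac[2], mac[3], mac[4], mac[5])
}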
May 14 23:54:37.380309 containerd[1458]: 2025-05-14 23:54:37.355 [INFO][4474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" HandleID="k8s-pod-network.981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Workload="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" May 14 23:54:37.380969 containerd[1458]: 2025-05-14 23:54:37.361 [INFO][4360] cni-plugin/k8s.go 386: Populated endpoint ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Namespace="calico-system" Pod="calico-kube-controllers-55976b5db9-6xq9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0", GenerateName:"calico-kube-controllers-55976b5db9-", Namespace:"calico-system", SelfLink:"", UID:"8967758b-5095-4f47-a879-8fbab63daefc", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55976b5db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55976b5db9-6xq9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4bf7b453527", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.380969 containerd[1458]: 2025-05-14 23:54:37.361 [INFO][4360] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Namespace="calico-system" Pod="calico-kube-controllers-55976b5db9-6xq9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" May 14 23:54:37.380969 containerd[1458]: 2025-05-14 23:54:37.361 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4bf7b453527 ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Namespace="calico-system" Pod="calico-kube-controllers-55976b5db9-6xq9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" May 14 23:54:37.380969 containerd[1458]: 2025-05-14 23:54:37.367 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Namespace="calico-system" Pod="calico-kube-controllers-55976b5db9-6xq9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" May 14 23:54:37.380969 containerd[1458]: 2025-05-14 23:54:37.368 [INFO][4360] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID 
to endpoint ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Namespace="calico-system" Pod="calico-kube-controllers-55976b5db9-6xq9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0", GenerateName:"calico-kube-controllers-55976b5db9-", Namespace:"calico-system", SelfLink:"", UID:"8967758b-5095-4f47-a879-8fbab63daefc", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55976b5db9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b", Pod:"calico-kube-controllers-55976b5db9-6xq9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4bf7b453527", MAC:"22:66:d9:b9:6e:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.380969 containerd[1458]: 2025-05-14 23:54:37.377 [INFO][4360] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b" Namespace="calico-system" Pod="calico-kube-controllers-55976b5db9-6xq9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55976b5db9--6xq9g-eth0" May 14 23:54:37.397679 systemd-networkd[1388]: calib54471cd635: Link UP May 14 23:54:37.397890 systemd-networkd[1388]: calib54471cd635: Gained carrier May 14 23:54:37.410094 containerd[1458]: time="2025-05-14T23:54:37.409908061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:37.410094 containerd[1458]: time="2025-05-14T23:54:37.409960860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:37.410094 containerd[1458]: time="2025-05-14T23:54:37.409972060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.411020 containerd[1458]: time="2025-05-14T23:54:37.410052778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:36.973 [INFO][4415] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.018 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xq692-eth0 csi-node-driver- calico-system 78b8ecbd-ebea-41ed-a71c-8ddd96e45e21 626 0 2025-05-14 23:54:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xq692 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib54471cd635 [] []}} ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Namespace="calico-system" Pod="csi-node-driver-xq692" WorkloadEndpoint="localhost-k8s-csi--node--driver--xq692-" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.019 [INFO][4415] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Namespace="calico-system" Pod="csi-node-driver-xq692" WorkloadEndpoint="localhost-k8s-csi--node--driver--xq692-eth0" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.261 [INFO][4473] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" HandleID="k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Workload="localhost-k8s-csi--node--driver--xq692-eth0" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.280 [INFO][4473] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" HandleID="k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Workload="localhost-k8s-csi--node--driver--xq692-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xq692", "timestamp":"2025-05-14 23:54:37.261611545 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.280 [INFO][4473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.356 [INFO][4473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.356 [INFO][4473] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.359 [INFO][4473] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.363 [INFO][4473] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.369 [INFO][4473] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.370 [INFO][4473] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.373 [INFO][4473] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.373 [INFO][4473] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.376 [INFO][4473] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6 May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.381 [INFO][4473] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.387 [INFO][4473] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.388 [INFO][4473] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" host="localhost" May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.388 [INFO][4473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:37.411020 containerd[1458]: 2025-05-14 23:54:37.388 [INFO][4473] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" HandleID="k8s-pod-network.3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Workload="localhost-k8s-csi--node--driver--xq692-eth0" May 14 23:54:37.411527 containerd[1458]: 2025-05-14 23:54:37.390 [INFO][4415] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Namespace="calico-system" Pod="csi-node-driver-xq692" WorkloadEndpoint="localhost-k8s-csi--node--driver--xq692-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xq692-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78b8ecbd-ebea-41ed-a71c-8ddd96e45e21", ResourceVersion:"626", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xq692", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib54471cd635", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.411527 containerd[1458]: 2025-05-14 23:54:37.390 [INFO][4415] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Namespace="calico-system" Pod="csi-node-driver-xq692" WorkloadEndpoint="localhost-k8s-csi--node--driver--xq692-eth0" May 14 23:54:37.411527 containerd[1458]: 2025-05-14 23:54:37.390 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib54471cd635 ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Namespace="calico-system" Pod="csi-node-driver-xq692" WorkloadEndpoint="localhost-k8s-csi--node--driver--xq692-eth0" May 14 23:54:37.411527 containerd[1458]: 2025-05-14 23:54:37.396 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Namespace="calico-system" Pod="csi-node-driver-xq692" WorkloadEndpoint="localhost-k8s-csi--node--driver--xq692-eth0" May 14 23:54:37.411527 containerd[1458]: 2025-05-14 23:54:37.396 [INFO][4415] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Namespace="calico-system" Pod="csi-node-driver-xq692" WorkloadEndpoint="localhost-k8s-csi--node--driver--xq692-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xq692-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78b8ecbd-ebea-41ed-a71c-8ddd96e45e21", ResourceVersion:"626", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6", Pod:"csi-node-driver-xq692", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib54471cd635", MAC:"0e:6a:1e:b7:8c:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.411527 containerd[1458]: 2025-05-14 23:54:37.406 [INFO][4415] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6" Namespace="calico-system" Pod="csi-node-driver-xq692" WorkloadEndpoint="localhost-k8s-csi--node--driver--xq692-eth0" May 14 23:54:37.418376 containerd[1458]: time="2025-05-14T23:54:37.417932428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:37.418376 containerd[1458]: time="2025-05-14T23:54:37.418069305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:37.418376 containerd[1458]: time="2025-05-14T23:54:37.418086625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.418376 containerd[1458]: time="2025-05-14T23:54:37.418271340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.431482 systemd-networkd[1388]: caliea1a75687fa: Link UP May 14 23:54:37.432899 systemd-networkd[1388]: caliea1a75687fa: Gained carrier May 14 23:54:37.448736 containerd[1458]: time="2025-05-14T23:54:37.448389976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:37.448736 containerd[1458]: time="2025-05-14T23:54:37.448455455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:37.448736 containerd[1458]: time="2025-05-14T23:54:37.448467534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.448736 containerd[1458]: time="2025-05-14T23:54:37.448569052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.448909 systemd[1]: Started cri-containerd-981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b.scope - libcontainer container 981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b. May 14 23:54:37.453616 systemd[1]: Started cri-containerd-40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770.scope - libcontainer container 40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770. May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:36.976 [INFO][4429] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.020 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--m9489-eth0 coredns-7db6d8ff4d- kube-system 5296ee56-ba99-4940-a157-166e05449d2c 689 0 2025-05-14 23:54:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-m9489 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliea1a75687fa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m9489" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--m9489-" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.020 [INFO][4429] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m9489" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.257 [INFO][4478] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" HandleID="k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Workload="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.281 [INFO][4478] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" HandleID="k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Workload="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003be560), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-m9489", "timestamp":"2025-05-14 23:54:37.257082014 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.282 [INFO][4478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.388 [INFO][4478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.388 [INFO][4478] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.391 [INFO][4478] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.397 [INFO][4478] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.406 [INFO][4478] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.408 [INFO][4478] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.412 [INFO][4478] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.412 [INFO][4478] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.414 [INFO][4478] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.418 [INFO][4478] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.424 [INFO][4478] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.424 [INFO][4478] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" host="localhost" May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.424 [INFO][4478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:37.458076 containerd[1458]: 2025-05-14 23:54:37.424 [INFO][4478] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" HandleID="k8s-pod-network.4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Workload="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" May 14 23:54:37.458748 containerd[1458]: 2025-05-14 23:54:37.428 [INFO][4429] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m9489" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--m9489-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5296ee56-ba99-4940-a157-166e05449d2c", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-m9489", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea1a75687fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.458748 containerd[1458]: 2025-05-14 23:54:37.428 [INFO][4429] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m9489" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" May 14 23:54:37.458748 containerd[1458]: 2025-05-14 23:54:37.428 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea1a75687fa ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m9489" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" May 14 23:54:37.458748 containerd[1458]: 2025-05-14 23:54:37.433 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m9489" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" May 14 23:54:37.458748 containerd[1458]: 2025-05-14 23:54:37.437 
[INFO][4429] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m9489" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--m9489-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5296ee56-ba99-4940-a157-166e05449d2c", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad", Pod:"coredns-7db6d8ff4d-m9489", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea1a75687fa", MAC:"e2:53:81:6d:49:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.458748 containerd[1458]: 2025-05-14 23:54:37.450 [INFO][4429] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-m9489" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--m9489-eth0" May 14 23:54:37.476399 systemd[1]: Started cri-containerd-3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6.scope - libcontainer container 3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6. 
May 14 23:54:37.480885 systemd-networkd[1388]: calibe7d8fec7f8: Link UP May 14 23:54:37.482267 systemd-networkd[1388]: calibe7d8fec7f8: Gained carrier May 14 23:54:37.487625 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:37.489933 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:36.997 [INFO][4433] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.031 [INFO][4433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0 calico-apiserver-948cf5cf6- calico-apiserver dbd41afb-946c-4f95-a556-b712c9bfb043 696 0 2025-05-14 23:54:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:948cf5cf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-948cf5cf6-mfgt6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe7d8fec7f8 [] []}} ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-mfgt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.031 [INFO][4433] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-mfgt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.259 [INFO][4500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" HandleID="k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Workload="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.282 [INFO][4500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" HandleID="k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Workload="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000632070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-948cf5cf6-mfgt6", "timestamp":"2025-05-14 23:54:37.259790589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.282 [INFO][4500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.424 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.424 [INFO][4500] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.428 [INFO][4500] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.436 [INFO][4500] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.446 [INFO][4500] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.448 [INFO][4500] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.454 [INFO][4500] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.454 [INFO][4500] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.457 [INFO][4500] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789 May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.463 [INFO][4500] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.471 [INFO][4500] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.471 [INFO][4500] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" host="localhost" May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.471 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:37.508372 containerd[1458]: 2025-05-14 23:54:37.471 [INFO][4500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" HandleID="k8s-pod-network.634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Workload="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" May 14 23:54:37.509194 containerd[1458]: 2025-05-14 23:54:37.474 [INFO][4433] cni-plugin/k8s.go 386: Populated endpoint ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-mfgt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0", GenerateName:"calico-apiserver-948cf5cf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbd41afb-946c-4f95-a556-b712c9bfb043", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948cf5cf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-948cf5cf6-mfgt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe7d8fec7f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.509194 containerd[1458]: 2025-05-14 23:54:37.475 [INFO][4433] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-mfgt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" May 14 23:54:37.509194 containerd[1458]: 2025-05-14 23:54:37.475 [INFO][4433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe7d8fec7f8 ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-mfgt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" May 14 23:54:37.509194 containerd[1458]: 2025-05-14 23:54:37.482 [INFO][4433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-mfgt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" May 14 23:54:37.509194 containerd[1458]: 2025-05-14 23:54:37.483 [INFO][4433] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" 
Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-mfgt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0", GenerateName:"calico-apiserver-948cf5cf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbd41afb-946c-4f95-a556-b712c9bfb043", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948cf5cf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789", Pod:"calico-apiserver-948cf5cf6-mfgt6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe7d8fec7f8", MAC:"12:9a:7d:2a:4f:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.509194 containerd[1458]: 2025-05-14 23:54:37.501 [INFO][4433] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-mfgt6" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--mfgt6-eth0" May 14 23:54:37.509776 containerd[1458]: time="2025-05-14T23:54:37.508320336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:37.509776 containerd[1458]: time="2025-05-14T23:54:37.508651888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:37.509776 containerd[1458]: time="2025-05-14T23:54:37.508667568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.509776 containerd[1458]: time="2025-05-14T23:54:37.508813444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.524076 containerd[1458]: time="2025-05-14T23:54:37.523643008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b47sh,Uid:f03276de-018a-4c35-8ec5-8ee9060e84e4,Namespace:kube-system,Attempt:4,} returns sandbox id \"40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770\"" May 14 23:54:37.523980 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:37.527590 systemd-networkd[1388]: cali245c953e4ab: Link UP May 14 23:54:37.528426 systemd-networkd[1388]: cali245c953e4ab: Gained carrier May 14 23:54:37.532167 containerd[1458]: time="2025-05-14T23:54:37.532124764Z" level=info msg="CreateContainer within sandbox \"40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:36.922 [INFO][4383] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.018 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0 calico-apiserver-948cf5cf6- calico-apiserver e2130025-45f8-46c5-b0f8-d9cf000b93a0 697 0 2025-05-14 23:54:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:948cf5cf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-948cf5cf6-nxpmq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali245c953e4ab [] []}} ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-nxpmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.018 [INFO][4383] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-nxpmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.258 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" HandleID="k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Workload="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.284 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" HandleID="k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Workload="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400040e0b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-948cf5cf6-nxpmq", "timestamp":"2025-05-14 23:54:37.258693975 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.285 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.471 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.471 [INFO][4476] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.473 [INFO][4476] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.481 [INFO][4476] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.492 [INFO][4476] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.498 [INFO][4476] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.502 [INFO][4476] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.502 [INFO][4476] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.505 [INFO][4476] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.510 [INFO][4476] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.520 [INFO][4476] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.520 [INFO][4476] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" host="localhost" May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.521 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 23:54:37.546386 containerd[1458]: 2025-05-14 23:54:37.521 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" HandleID="k8s-pod-network.e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Workload="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" May 14 23:54:37.547791 containerd[1458]: 2025-05-14 23:54:37.523 [INFO][4383] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-nxpmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0", GenerateName:"calico-apiserver-948cf5cf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2130025-45f8-46c5-b0f8-d9cf000b93a0", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948cf5cf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-948cf5cf6-nxpmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali245c953e4ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.547791 containerd[1458]: 2025-05-14 23:54:37.523 [INFO][4383] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-nxpmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" May 14 23:54:37.547791 containerd[1458]: 2025-05-14 23:54:37.523 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali245c953e4ab ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-nxpmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" May 14 23:54:37.547791 containerd[1458]: 2025-05-14 23:54:37.529 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-nxpmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" May 14 23:54:37.547791 containerd[1458]: 2025-05-14 23:54:37.529 [INFO][4383] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" 
Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-nxpmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0", GenerateName:"calico-apiserver-948cf5cf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2130025-45f8-46c5-b0f8-d9cf000b93a0", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"948cf5cf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c", Pod:"calico-apiserver-948cf5cf6-nxpmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali245c953e4ab", MAC:"1a:e4:51:3a:ec:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 23:54:37.547791 containerd[1458]: 2025-05-14 23:54:37.541 [INFO][4383] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c" Namespace="calico-apiserver" Pod="calico-apiserver-948cf5cf6-nxpmq" WorkloadEndpoint="localhost-k8s-calico--apiserver--948cf5cf6--nxpmq-eth0" May 14 23:54:37.550846 systemd[1]: Started cri-containerd-4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad.scope - libcontainer container 4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad. May 14 23:54:37.562932 containerd[1458]: time="2025-05-14T23:54:37.562470995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:37.562932 containerd[1458]: time="2025-05-14T23:54:37.562518874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:37.562932 containerd[1458]: time="2025-05-14T23:54:37.562529673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.563729 containerd[1458]: time="2025-05-14T23:54:37.563472331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.564047 containerd[1458]: time="2025-05-14T23:54:37.563917440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xq692,Uid:78b8ecbd-ebea-41ed-a71c-8ddd96e45e21,Namespace:calico-system,Attempt:4,} returns sandbox id \"3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6\"" May 14 23:54:37.566518 containerd[1458]: time="2025-05-14T23:54:37.566324742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 23:54:37.569276 containerd[1458]: time="2025-05-14T23:54:37.569215433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55976b5db9-6xq9g,Uid:8967758b-5095-4f47-a879-8fbab63daefc,Namespace:calico-system,Attempt:4,} returns sandbox id \"981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b\"" May 14 23:54:37.569306 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:37.570338 containerd[1458]: time="2025-05-14T23:54:37.570002214Z" level=info msg="CreateContainer within sandbox \"40f379bf3ec8d22f663a1f59695605078b15698b8b8eb30e611adbcca1024770\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac2a81e41f77b74445487dabcb9d86d4d4054015febd6ecbf4eff4a9a5fca8c6\"" May 14 23:54:37.573546 containerd[1458]: time="2025-05-14T23:54:37.572768427Z" level=info msg="StartContainer for \"ac2a81e41f77b74445487dabcb9d86d4d4054015febd6ecbf4eff4a9a5fca8c6\"" May 14 23:54:37.585964 containerd[1458]: time="2025-05-14T23:54:37.585775315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:54:37.585964 containerd[1458]: time="2025-05-14T23:54:37.585847073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:54:37.585964 containerd[1458]: time="2025-05-14T23:54:37.585863513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.589043 containerd[1458]: time="2025-05-14T23:54:37.588891480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:54:37.600753 systemd[1]: Started cri-containerd-634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789.scope - libcontainer container 634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789. May 14 23:54:37.602620 containerd[1458]: time="2025-05-14T23:54:37.602585151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m9489,Uid:5296ee56-ba99-4940-a157-166e05449d2c,Namespace:kube-system,Attempt:4,} returns sandbox id \"4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad\"" May 14 23:54:37.607854 containerd[1458]: time="2025-05-14T23:54:37.607743547Z" level=info msg="CreateContainer within sandbox \"4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 23:54:37.611953 systemd[1]: Started cri-containerd-ac2a81e41f77b74445487dabcb9d86d4d4054015febd6ecbf4eff4a9a5fca8c6.scope - libcontainer container ac2a81e41f77b74445487dabcb9d86d4d4054015febd6ecbf4eff4a9a5fca8c6. 
May 14 23:54:37.617310 systemd[1]: Started cri-containerd-e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c.scope - libcontainer container e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c. May 14 23:54:37.624498 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:37.631008 containerd[1458]: time="2025-05-14T23:54:37.630959469Z" level=info msg="CreateContainer within sandbox \"4e320af53beac69f6ed03b92901708a5148f9ebcac0512d73340ea59cb5273ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"850704d929e05811f0344fd345b4594aa8399c88ec09bc9712ff70eaf3d72276\"" May 14 23:54:37.631914 containerd[1458]: time="2025-05-14T23:54:37.631878887Z" level=info msg="StartContainer for \"850704d929e05811f0344fd345b4594aa8399c88ec09bc9712ff70eaf3d72276\"" May 14 23:54:37.639906 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 23:54:37.648862 containerd[1458]: time="2025-05-14T23:54:37.648791640Z" level=info msg="StartContainer for \"ac2a81e41f77b74445487dabcb9d86d4d4054015febd6ecbf4eff4a9a5fca8c6\" returns successfully" May 14 23:54:37.656486 containerd[1458]: time="2025-05-14T23:54:37.656368378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-mfgt6,Uid:dbd41afb-946c-4f95-a556-b712c9bfb043,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789\"" May 14 23:54:37.676665 containerd[1458]: time="2025-05-14T23:54:37.676385577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-948cf5cf6-nxpmq,Uid:e2130025-45f8-46c5-b0f8-d9cf000b93a0,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c\"" May 14 23:54:37.679726 systemd[1]: Started cri-containerd-850704d929e05811f0344fd345b4594aa8399c88ec09bc9712ff70eaf3d72276.scope - libcontainer container 850704d929e05811f0344fd345b4594aa8399c88ec09bc9712ff70eaf3d72276. 
May 14 23:54:37.706697 containerd[1458]: time="2025-05-14T23:54:37.706644490Z" level=info msg="StartContainer for \"850704d929e05811f0344fd345b4594aa8399c88ec09bc9712ff70eaf3d72276\" returns successfully" May 14 23:54:37.807621 kubelet[2636]: I0514 23:54:37.807524 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-m9489" podStartSLOduration=19.807507426 podStartE2EDuration="19.807507426s" podCreationTimestamp="2025-05-14 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:37.807408308 +0000 UTC m=+33.354717273" watchObservedRunningTime="2025-05-14 23:54:37.807507426 +0000 UTC m=+33.354816391" May 14 23:54:38.460170 containerd[1458]: time="2025-05-14T23:54:38.460117850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:38.460700 containerd[1458]: time="2025-05-14T23:54:38.460658918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 14 23:54:38.466478 containerd[1458]: time="2025-05-14T23:54:38.466441383Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:38.468551 containerd[1458]: time="2025-05-14T23:54:38.468483656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:38.469362 containerd[1458]: time="2025-05-14T23:54:38.469273397Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 902.914535ms" May 14 23:54:38.469362 containerd[1458]: time="2025-05-14T23:54:38.469307716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 14 23:54:38.470234 containerd[1458]: time="2025-05-14T23:54:38.470122057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 23:54:38.472171 containerd[1458]: time="2025-05-14T23:54:38.472140450Z" level=info msg="CreateContainer within sandbox \"3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 23:54:38.485035 containerd[1458]: time="2025-05-14T23:54:38.484984352Z" level=info msg="CreateContainer within sandbox \"3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"53dfaf92817250d8bc246dcc2ad43d1f99263f15e13c9fb48ef7079a70cbfdf6\"" May 14 23:54:38.486568 containerd[1458]: time="2025-05-14T23:54:38.485594977Z" level=info msg="StartContainer for \"53dfaf92817250d8bc246dcc2ad43d1f99263f15e13c9fb48ef7079a70cbfdf6\"" May 14 23:54:38.514129 systemd[1]: Started cri-containerd-53dfaf92817250d8bc246dcc2ad43d1f99263f15e13c9fb48ef7079a70cbfdf6.scope - libcontainer container 53dfaf92817250d8bc246dcc2ad43d1f99263f15e13c9fb48ef7079a70cbfdf6. 
May 14 23:54:38.549969 containerd[1458]: time="2025-05-14T23:54:38.549914280Z" level=info msg="StartContainer for \"53dfaf92817250d8bc246dcc2ad43d1f99263f15e13c9fb48ef7079a70cbfdf6\" returns successfully" May 14 23:54:38.654695 systemd-networkd[1388]: cali0218a991330: Gained IPv6LL May 14 23:54:38.827638 kubelet[2636]: I0514 23:54:38.827039 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b47sh" podStartSLOduration=19.82702247 podStartE2EDuration="19.82702247s" podCreationTimestamp="2025-05-14 23:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:54:37.8601818 +0000 UTC m=+33.407490805" watchObservedRunningTime="2025-05-14 23:54:38.82702247 +0000 UTC m=+34.374331435" May 14 23:54:38.846774 systemd-networkd[1388]: caliea1a75687fa: Gained IPv6LL May 14 23:54:39.038740 systemd-networkd[1388]: cali245c953e4ab: Gained IPv6LL May 14 23:54:39.103575 systemd-networkd[1388]: calibe7d8fec7f8: Gained IPv6LL May 14 23:54:39.166735 systemd-networkd[1388]: cali4bf7b453527: Gained IPv6LL May 14 23:54:39.294935 systemd-networkd[1388]: calib54471cd635: Gained IPv6LL May 14 23:54:39.296073 systemd[1]: Started sshd@8-10.0.0.62:22-10.0.0.1:46864.service - OpenSSH per-connection server daemon (10.0.0.1:46864). May 14 23:54:39.347140 sshd[5128]: Accepted publickey for core from 10.0.0.1 port 46864 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:39.350362 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:39.359002 systemd-logind[1440]: New session 9 of user core. May 14 23:54:39.369972 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 23:54:39.541710 sshd[5130]: Connection closed by 10.0.0.1 port 46864 May 14 23:54:39.542246 sshd-session[5128]: pam_unix(sshd:session): session closed for user core May 14 23:54:39.544754 systemd[1]: sshd@8-10.0.0.62:22-10.0.0.1:46864.service: Deactivated successfully. May 14 23:54:39.547140 systemd[1]: session-9.scope: Deactivated successfully. May 14 23:54:39.548402 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. May 14 23:54:39.549269 systemd-logind[1440]: Removed session 9. 
May 14 23:54:39.898218 containerd[1458]: time="2025-05-14T23:54:39.897754505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:39.898704 containerd[1458]: time="2025-05-14T23:54:39.898662164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 14 23:54:39.899166 containerd[1458]: time="2025-05-14T23:54:39.899134594Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:39.902379 containerd[1458]: time="2025-05-14T23:54:39.902300242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:39.903153 containerd[1458]: time="2025-05-14T23:54:39.902980467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.43283037s" May 14 23:54:39.903153 containerd[1458]: time="2025-05-14T23:54:39.903016946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 14 23:54:39.904602 containerd[1458]: time="2025-05-14T23:54:39.904510232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 23:54:39.912276 containerd[1458]: time="2025-05-14T23:54:39.912188099Z" level=info msg="CreateContainer within sandbox \"981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 23:54:39.924571 containerd[1458]: time="2025-05-14T23:54:39.924507221Z" level=info msg="CreateContainer within sandbox \"981630902a1cc42bcb9bfc5d6d25e83f7822d38a04ec51791ae8f48ab29e1c3b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fb3b155cfb20957874a446e8eceb7a324c0f2f9857c03be7d4bab9caa7bc25f8\"" May 14 23:54:39.925110 containerd[1458]: time="2025-05-14T23:54:39.925083048Z" level=info msg="StartContainer for \"fb3b155cfb20957874a446e8eceb7a324c0f2f9857c03be7d4bab9caa7bc25f8\"" May 14 23:54:39.968771 systemd[1]: Started cri-containerd-fb3b155cfb20957874a446e8eceb7a324c0f2f9857c03be7d4bab9caa7bc25f8.scope - libcontainer container fb3b155cfb20957874a446e8eceb7a324c0f2f9857c03be7d4bab9caa7bc25f8. 
May 14 23:54:40.087553 containerd[1458]: time="2025-05-14T23:54:40.087490001Z" level=info msg="StartContainer for \"fb3b155cfb20957874a446e8eceb7a324c0f2f9857c03be7d4bab9caa7bc25f8\" returns successfully" May 14 23:54:40.848844 kubelet[2636]: I0514 23:54:40.848143 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55976b5db9-6xq9g" podStartSLOduration=13.517744464 podStartE2EDuration="15.848124302s" podCreationTimestamp="2025-05-14 23:54:25 +0000 UTC" firstStartedPulling="2025-05-14 23:54:37.573677365 +0000 UTC m=+33.120986290" lastFinishedPulling="2025-05-14 23:54:39.904057123 +0000 UTC m=+35.451366128" observedRunningTime="2025-05-14 23:54:40.846212224 +0000 UTC m=+36.393521189" watchObservedRunningTime="2025-05-14 23:54:40.848124302 +0000 UTC m=+36.395433267" May 14 23:54:41.359228 containerd[1458]: time="2025-05-14T23:54:41.359185252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:41.359865 containerd[1458]: time="2025-05-14T23:54:41.359821639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 14 23:54:41.360539 containerd[1458]: time="2025-05-14T23:54:41.360485225Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:41.362621 containerd[1458]: time="2025-05-14T23:54:41.362570260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:41.364026 containerd[1458]: time="2025-05-14T23:54:41.363555079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.459016328s" May 14 23:54:41.364026 containerd[1458]: time="2025-05-14T23:54:41.363590039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 23:54:41.365751 containerd[1458]: time="2025-05-14T23:54:41.365575236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 23:54:41.366530 containerd[1458]: time="2025-05-14T23:54:41.366493097Z" level=info msg="CreateContainer within sandbox \"634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 23:54:41.379452 containerd[1458]: time="2025-05-14T23:54:41.379172707Z" level=info msg="CreateContainer within sandbox \"634d740ef36c284e7d4b512fee99c8d10986ea098c947cd889a64e88a9e1f789\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2bf60a7eca2cacf4b6f78a026d7920b3f8a1f9fc1c2952db9a72940093448854\"" May 14 23:54:41.382698 containerd[1458]: time="2025-05-14T23:54:41.382621274Z" level=info msg="StartContainer for \"2bf60a7eca2cacf4b6f78a026d7920b3f8a1f9fc1c2952db9a72940093448854\"" May 14 23:54:41.415679 systemd[1]: Started 
cri-containerd-2bf60a7eca2cacf4b6f78a026d7920b3f8a1f9fc1c2952db9a72940093448854.scope - libcontainer container 2bf60a7eca2cacf4b6f78a026d7920b3f8a1f9fc1c2952db9a72940093448854. May 14 23:54:41.460442 containerd[1458]: time="2025-05-14T23:54:41.460403339Z" level=info msg="StartContainer for \"2bf60a7eca2cacf4b6f78a026d7920b3f8a1f9fc1c2952db9a72940093448854\" returns successfully" May 14 23:54:41.638000 containerd[1458]: time="2025-05-14T23:54:41.637873442Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:41.639841 containerd[1458]: time="2025-05-14T23:54:41.638394791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 23:54:41.641176 containerd[1458]: time="2025-05-14T23:54:41.641132453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 275.522337ms" May 14 23:54:41.641176 containerd[1458]: time="2025-05-14T23:54:41.641175372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 23:54:41.643207 containerd[1458]: time="2025-05-14T23:54:41.642724459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 23:54:41.644790 containerd[1458]: time="2025-05-14T23:54:41.644270226Z" level=info msg="CreateContainer within sandbox \"e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 23:54:41.654085 containerd[1458]: time="2025-05-14T23:54:41.654039538Z" level=info msg="CreateContainer within sandbox \"e397037779f8c00cf3d83269bdbd76bbf883fe8014835571c9266ba982f27d4c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aee88cbef46d0ad9e32d242352cce984c6e2304cd2fce9f1e6314b4e8b8b889d\"" May 14 23:54:41.654520 containerd[1458]: time="2025-05-14T23:54:41.654498569Z" level=info msg="StartContainer for \"aee88cbef46d0ad9e32d242352cce984c6e2304cd2fce9f1e6314b4e8b8b889d\"" May 14 23:54:41.682818 systemd[1]: Started cri-containerd-aee88cbef46d0ad9e32d242352cce984c6e2304cd2fce9f1e6314b4e8b8b889d.scope - libcontainer container aee88cbef46d0ad9e32d242352cce984c6e2304cd2fce9f1e6314b4e8b8b889d. 
May 14 23:54:41.719304 containerd[1458]: time="2025-05-14T23:54:41.719264431Z" level=info msg="StartContainer for \"aee88cbef46d0ad9e32d242352cce984c6e2304cd2fce9f1e6314b4e8b8b889d\" returns successfully" May 14 23:54:41.848725 kubelet[2636]: I0514 23:54:41.847749 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-948cf5cf6-mfgt6" podStartSLOduration=14.143017333 podStartE2EDuration="17.847710097s" podCreationTimestamp="2025-05-14 23:54:24 +0000 UTC" firstStartedPulling="2025-05-14 23:54:37.660091689 +0000 UTC m=+33.207400654" lastFinishedPulling="2025-05-14 23:54:41.364784453 +0000 UTC m=+36.912093418" observedRunningTime="2025-05-14 23:54:41.847702458 +0000 UTC m=+37.395011423" watchObservedRunningTime="2025-05-14 23:54:41.847710097 +0000 UTC m=+37.395019062" May 14 23:54:42.839199 containerd[1458]: time="2025-05-14T23:54:42.838430786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:42.839721 containerd[1458]: time="2025-05-14T23:54:42.839433405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 14 23:54:42.840218 containerd[1458]: time="2025-05-14T23:54:42.840189990Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:42.843032 containerd[1458]: time="2025-05-14T23:54:42.843003811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:54:42.845678 containerd[1458]: time="2025-05-14T23:54:42.845646517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.202891178s" May 14 23:54:42.845933 containerd[1458]: time="2025-05-14T23:54:42.845681316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 14 23:54:42.846341 kubelet[2636]: I0514 23:54:42.846181 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:54:42.846341 kubelet[2636]: I0514 23:54:42.846291 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:54:42.850167 containerd[1458]: time="2025-05-14T23:54:42.850136384Z" level=info msg="CreateContainer within sandbox \"3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 23:54:42.875709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1599219310.mount: Deactivated successfully. 
May 14 23:54:42.895015 containerd[1458]: time="2025-05-14T23:54:42.894965936Z" level=info msg="CreateContainer within sandbox \"3774646ffcbf8bdaae2439a54e856a96a11fe773b0e0a3316688ee67ae1e43e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d8df90fba5305a5ae40b8a0e9553093d11976525959f9e511016b931c4a07b8a\"" May 14 23:54:42.895507 containerd[1458]: time="2025-05-14T23:54:42.895475806Z" level=info msg="StartContainer for \"d8df90fba5305a5ae40b8a0e9553093d11976525959f9e511016b931c4a07b8a\"" May 14 23:54:42.938744 systemd[1]: Started cri-containerd-d8df90fba5305a5ae40b8a0e9553093d11976525959f9e511016b931c4a07b8a.scope - libcontainer container d8df90fba5305a5ae40b8a0e9553093d11976525959f9e511016b931c4a07b8a. May 14 23:54:42.982553 containerd[1458]: time="2025-05-14T23:54:42.982419566Z" level=info msg="StartContainer for \"d8df90fba5305a5ae40b8a0e9553093d11976525959f9e511016b931c4a07b8a\" returns successfully" May 14 23:54:43.636686 kubelet[2636]: I0514 23:54:43.636577 2636 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 23:54:43.638311 kubelet[2636]: I0514 23:54:43.638289 2636 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 23:54:43.871898 kubelet[2636]: I0514 23:54:43.871472 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-948cf5cf6-nxpmq" podStartSLOduration=15.907387272 podStartE2EDuration="19.871454366s" podCreationTimestamp="2025-05-14 23:54:24 +0000 UTC" firstStartedPulling="2025-05-14 23:54:37.677819543 +0000 UTC m=+33.225128508" lastFinishedPulling="2025-05-14 23:54:41.641886637 +0000 UTC m=+37.189195602" observedRunningTime="2025-05-14 23:54:41.858510028 +0000 UTC m=+37.405818993" watchObservedRunningTime="2025-05-14 23:54:43.871454366 +0000 UTC m=+39.418763331" May 14 23:54:44.554643 systemd[1]: Started sshd@9-10.0.0.62:22-10.0.0.1:48706.service - OpenSSH per-connection server daemon (10.0.0.1:48706). May 14 23:54:44.636955 sshd[5443]: Accepted publickey for core from 10.0.0.1 port 48706 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:44.638884 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:44.644423 systemd-logind[1440]: New session 10 of user core. May 14 23:54:44.655861 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 23:54:44.887413 sshd[5461]: Connection closed by 10.0.0.1 port 48706 May 14 23:54:44.887940 sshd-session[5443]: pam_unix(sshd:session): session closed for user core May 14 23:54:44.897835 systemd[1]: sshd@9-10.0.0.62:22-10.0.0.1:48706.service: Deactivated successfully. May 14 23:54:44.899524 systemd[1]: session-10.scope: Deactivated successfully. May 14 23:54:44.901127 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. May 14 23:54:44.914881 systemd[1]: Started sshd@10-10.0.0.62:22-10.0.0.1:48722.service - OpenSSH per-connection server daemon (10.0.0.1:48722). May 14 23:54:44.915872 systemd-logind[1440]: Removed session 10. 
May 14 23:54:44.953899 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 48722 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:44.955116 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:44.959208 systemd-logind[1440]: New session 11 of user core. May 14 23:54:44.970706 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 23:54:45.140129 sshd[5482]: Connection closed by 10.0.0.1 port 48722 May 14 23:54:45.141445 sshd-session[5479]: pam_unix(sshd:session): session closed for user core May 14 23:54:45.156649 systemd[1]: sshd@10-10.0.0.62:22-10.0.0.1:48722.service: Deactivated successfully. May 14 23:54:45.160234 systemd[1]: session-11.scope: Deactivated successfully. May 14 23:54:45.162599 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. May 14 23:54:45.167957 systemd[1]: Started sshd@11-10.0.0.62:22-10.0.0.1:48726.service - OpenSSH per-connection server daemon (10.0.0.1:48726). May 14 23:54:45.169467 systemd-logind[1440]: Removed session 11. May 14 23:54:45.207499 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 48726 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:45.209485 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:45.214769 systemd-logind[1440]: New session 12 of user core. May 14 23:54:45.222696 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 23:54:45.367465 sshd[5497]: Connection closed by 10.0.0.1 port 48726 May 14 23:54:45.367833 sshd-session[5494]: pam_unix(sshd:session): session closed for user core May 14 23:54:45.372052 systemd[1]: sshd@11-10.0.0.62:22-10.0.0.1:48726.service: Deactivated successfully. May 14 23:54:45.373858 systemd[1]: session-12.scope: Deactivated successfully. May 14 23:54:45.374903 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. May 14 23:54:45.379644 systemd-logind[1440]: Removed session 12. May 14 23:54:45.898662 kubelet[2636]: I0514 23:54:45.898606 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:54:45.912201 kubelet[2636]: I0514 23:54:45.911956 2636 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xq692" podStartSLOduration=15.629930479 podStartE2EDuration="20.911940997s" podCreationTimestamp="2025-05-14 23:54:25 +0000 UTC" firstStartedPulling="2025-05-14 23:54:37.566045349 +0000 UTC m=+33.113354274" lastFinishedPulling="2025-05-14 23:54:42.848055827 +0000 UTC m=+38.395364792" observedRunningTime="2025-05-14 23:54:43.874673821 +0000 UTC m=+39.421982786" watchObservedRunningTime="2025-05-14 23:54:45.911940997 +0000 UTC m=+41.459249922" May 14 23:54:46.334584 kernel: bpftool[5552]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 23:54:46.501697 systemd-networkd[1388]: vxlan.calico: Link UP May 14 23:54:46.501705 systemd-networkd[1388]: vxlan.calico: Gained carrier May 14 23:54:47.614755 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL May 14 23:54:48.911140 kubelet[2636]: I0514 23:54:48.911091 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:54:50.382118 systemd[1]: Started sshd@12-10.0.0.62:22-10.0.0.1:48740.service - OpenSSH per-connection server daemon (10.0.0.1:48740). 
May 14 23:54:50.439847 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 48740 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:50.441385 sshd-session[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:50.446471 systemd-logind[1440]: New session 13 of user core. May 14 23:54:50.455729 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 23:54:50.688375 sshd[5686]: Connection closed by 10.0.0.1 port 48740 May 14 23:54:50.688650 sshd-session[5684]: pam_unix(sshd:session): session closed for user core May 14 23:54:50.700957 systemd[1]: sshd@12-10.0.0.62:22-10.0.0.1:48740.service: Deactivated successfully. May 14 23:54:50.703922 systemd[1]: session-13.scope: Deactivated successfully. May 14 23:54:50.705145 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. May 14 23:54:50.706378 systemd[1]: Started sshd@13-10.0.0.62:22-10.0.0.1:48742.service - OpenSSH per-connection server daemon (10.0.0.1:48742). May 14 23:54:50.707271 systemd-logind[1440]: Removed session 13. May 14 23:54:50.751302 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 48742 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:50.752736 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:50.762419 systemd-logind[1440]: New session 14 of user core. May 14 23:54:50.774022 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 23:54:51.040002 sshd[5701]: Connection closed by 10.0.0.1 port 48742 May 14 23:54:51.040570 sshd-session[5698]: pam_unix(sshd:session): session closed for user core May 14 23:54:51.053854 systemd[1]: sshd@13-10.0.0.62:22-10.0.0.1:48742.service: Deactivated successfully. May 14 23:54:51.055475 systemd[1]: session-14.scope: Deactivated successfully. May 14 23:54:51.057930 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. May 14 23:54:51.072895 systemd[1]: Started sshd@14-10.0.0.62:22-10.0.0.1:48752.service - OpenSSH per-connection server daemon (10.0.0.1:48752). May 14 23:54:51.073957 systemd-logind[1440]: Removed session 14. May 14 23:54:51.116128 sshd[5711]: Accepted publickey for core from 10.0.0.1 port 48752 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:51.117662 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:51.121402 systemd-logind[1440]: New session 15 of user core. May 14 23:54:51.131749 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 23:54:52.639364 sshd[5714]: Connection closed by 10.0.0.1 port 48752 May 14 23:54:52.641518 sshd-session[5711]: pam_unix(sshd:session): session closed for user core May 14 23:54:52.654359 systemd[1]: sshd@14-10.0.0.62:22-10.0.0.1:48752.service: Deactivated successfully. May 14 23:54:52.659822 systemd[1]: session-15.scope: Deactivated successfully. May 14 23:54:52.663248 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. May 14 23:54:52.671066 systemd[1]: Started sshd@15-10.0.0.62:22-10.0.0.1:46510.service - OpenSSH per-connection server daemon (10.0.0.1:46510). May 14 23:54:52.673032 systemd-logind[1440]: Removed session 15. 
May 14 23:54:52.720589 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 46510 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:52.722019 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:52.726856 systemd-logind[1440]: New session 16 of user core. May 14 23:54:52.736708 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 23:54:53.085742 sshd[5745]: Connection closed by 10.0.0.1 port 46510 May 14 23:54:53.086971 sshd-session[5742]: pam_unix(sshd:session): session closed for user core May 14 23:54:53.100561 systemd[1]: sshd@15-10.0.0.62:22-10.0.0.1:46510.service: Deactivated successfully. May 14 23:54:53.104943 systemd[1]: session-16.scope: Deactivated successfully. May 14 23:54:53.108656 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. May 14 23:54:53.121942 systemd[1]: Started sshd@16-10.0.0.62:22-10.0.0.1:46514.service - OpenSSH per-connection server daemon (10.0.0.1:46514). May 14 23:54:53.123002 systemd-logind[1440]: Removed session 16. May 14 23:54:53.161815 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 46514 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:53.163190 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:53.167250 systemd-logind[1440]: New session 17 of user core. May 14 23:54:53.177761 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 23:54:53.324557 sshd[5758]: Connection closed by 10.0.0.1 port 46514 May 14 23:54:53.325129 sshd-session[5755]: pam_unix(sshd:session): session closed for user core May 14 23:54:53.329067 systemd[1]: sshd@16-10.0.0.62:22-10.0.0.1:46514.service: Deactivated successfully. May 14 23:54:53.332363 systemd[1]: session-17.scope: Deactivated successfully. May 14 23:54:53.333952 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. May 14 23:54:53.335215 systemd-logind[1440]: Removed session 17. May 14 23:54:57.675034 kubelet[2636]: I0514 23:54:57.674856 2636 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:54:58.337180 systemd[1]: Started sshd@17-10.0.0.62:22-10.0.0.1:46516.service - OpenSSH per-connection server daemon (10.0.0.1:46516). May 14 23:54:58.387649 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 46516 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:54:58.388943 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:54:58.392888 systemd-logind[1440]: New session 18 of user core. May 14 23:54:58.405695 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 23:54:58.561080 sshd[5786]: Connection closed by 10.0.0.1 port 46516 May 14 23:54:58.561699 sshd-session[5784]: pam_unix(sshd:session): session closed for user core May 14 23:54:58.564960 systemd[1]: sshd@17-10.0.0.62:22-10.0.0.1:46516.service: Deactivated successfully. May 14 23:54:58.566920 systemd[1]: session-18.scope: Deactivated successfully. May 14 23:54:58.568726 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. May 14 23:54:58.569883 systemd-logind[1440]: Removed session 18. May 14 23:55:03.577267 systemd[1]: Started sshd@18-10.0.0.62:22-10.0.0.1:39776.service - OpenSSH per-connection server daemon (10.0.0.1:39776). 
May 14 23:55:03.640040 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 39776 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU May 14 23:55:03.641490 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:55:03.647590 systemd-logind[1440]: New session 19 of user core. May 14 23:55:03.657744 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 23:55:03.883396 sshd[5821]: Connection closed by 10.0.0.1 port 39776 May 14 23:55:03.883890 sshd-session[5819]: pam_unix(sshd:session): session closed for user core May 14 23:55:03.887350 systemd[1]: sshd@18-10.0.0.62:22-10.0.0.1:39776.service: Deactivated successfully. May 14 23:55:03.889156 systemd[1]: session-19.scope: Deactivated successfully. May 14 23:55:03.891026 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. May 14 23:55:03.891953 systemd-logind[1440]: Removed session 19. May 14 23:55:04.535042 containerd[1458]: time="2025-05-14T23:55:04.534992702Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\"" May 14 23:55:04.535469 containerd[1458]: time="2025-05-14T23:55:04.535107740Z" level=info msg="TearDown network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" successfully" May 14 23:55:04.535469 containerd[1458]: time="2025-05-14T23:55:04.535119540Z" level=info msg="StopPodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" returns successfully" May 14 23:55:04.536170 containerd[1458]: time="2025-05-14T23:55:04.535902729Z" level=info msg="RemovePodSandbox for \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\"" May 14 23:55:04.544911 containerd[1458]: time="2025-05-14T23:55:04.544854723Z" level=info msg="Forcibly stopping sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\"" May 14 23:55:04.545030 containerd[1458]: time="2025-05-14T23:55:04.544982721Z" level=info msg="TearDown network for sandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" successfully" May 14 23:55:04.561341 containerd[1458]: time="2025-05-14T23:55:04.561065456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:55:04.561341 containerd[1458]: time="2025-05-14T23:55:04.561213493Z" level=info msg="RemovePodSandbox \"411442b62549cd703144cf787d59ef4468de0c18b262472a2ba1f8b0af7ab32f\" returns successfully" May 14 23:55:04.562175 containerd[1458]: time="2025-05-14T23:55:04.562149720Z" level=info msg="StopPodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\"" May 14 23:55:04.562314 containerd[1458]: time="2025-05-14T23:55:04.562294438Z" level=info msg="TearDown network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" successfully" May 14 23:55:04.562314 containerd[1458]: time="2025-05-14T23:55:04.562313438Z" level=info msg="StopPodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" returns successfully" May 14 23:55:04.562672 containerd[1458]: time="2025-05-14T23:55:04.562642593Z" level=info msg="RemovePodSandbox for \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\"" May 14 23:55:04.562672 containerd[1458]: time="2025-05-14T23:55:04.562670953Z" level=info msg="Forcibly stopping sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\"" May 14 23:55:04.562764 containerd[1458]: time="2025-05-14T23:55:04.562739312Z" level=info msg="TearDown network for sandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" successfully" May 14 23:55:04.565366 containerd[1458]: time="2025-05-14T23:55:04.565334516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:55:04.565413 containerd[1458]: time="2025-05-14T23:55:04.565391195Z" level=info msg="RemovePodSandbox \"a7cff0ac920af035be360364ad9f3490cfa6439ba3727ac87d628b930665cdb4\" returns successfully" May 14 23:55:04.565810 containerd[1458]: time="2025-05-14T23:55:04.565788709Z" level=info msg="StopPodSandbox for \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\"" May 14 23:55:04.565897 containerd[1458]: time="2025-05-14T23:55:04.565880148Z" level=info msg="TearDown network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\" successfully" May 14 23:55:04.565927 containerd[1458]: time="2025-05-14T23:55:04.565894908Z" level=info msg="StopPodSandbox for \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\" returns successfully" May 14 23:55:04.566281 containerd[1458]: time="2025-05-14T23:55:04.566215063Z" level=info msg="RemovePodSandbox for \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\"" May 14 23:55:04.566378 containerd[1458]: time="2025-05-14T23:55:04.566268182Z" level=info msg="Forcibly stopping sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\"" May 14 23:55:04.566464 containerd[1458]: time="2025-05-14T23:55:04.566447580Z" level=info msg="TearDown network for sandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\" successfully" May 14 23:55:04.569256 containerd[1458]: time="2025-05-14T23:55:04.569208661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:55:04.569309 containerd[1458]: time="2025-05-14T23:55:04.569268780Z" level=info msg="RemovePodSandbox \"47a87353b700cc5924d1a24d563b41468d8b4cd51a69dd30b4c6101bd99384f1\" returns successfully" May 14 23:55:04.569739 containerd[1458]: time="2025-05-14T23:55:04.569703294Z" level=info msg="StopPodSandbox for \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\"" May 14 23:55:04.569816 containerd[1458]: time="2025-05-14T23:55:04.569793213Z" level=info msg="TearDown network for sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\" successfully" May 14 23:55:04.569816 containerd[1458]: time="2025-05-14T23:55:04.569810493Z" level=info msg="StopPodSandbox for \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\" returns successfully" May 14 23:55:04.570179 containerd[1458]: time="2025-05-14T23:55:04.570108888Z" level=info msg="RemovePodSandbox for \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\"" May 14 23:55:04.570179 containerd[1458]: time="2025-05-14T23:55:04.570177367Z" level=info msg="Forcibly stopping sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\"" May 14 23:55:04.570275 containerd[1458]: time="2025-05-14T23:55:04.570258526Z" level=info msg="TearDown network for sandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\" successfully" May 14 23:55:04.572861 containerd[1458]: time="2025-05-14T23:55:04.572822730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:55:04.572900 containerd[1458]: time="2025-05-14T23:55:04.572882169Z" level=info msg="RemovePodSandbox \"9eecfec659f3de06d2fe478adbb1bd614221f45bc96ed87dda589f41a8750028\" returns successfully" May 14 23:55:04.573303 containerd[1458]: time="2025-05-14T23:55:04.573276324Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\"" May 14 23:55:04.573397 containerd[1458]: time="2025-05-14T23:55:04.573376523Z" level=info msg="TearDown network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" successfully" May 14 23:55:04.573397 containerd[1458]: time="2025-05-14T23:55:04.573393562Z" level=info msg="StopPodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" returns successfully" May 14 23:55:04.573738 containerd[1458]: time="2025-05-14T23:55:04.573713198Z" level=info msg="RemovePodSandbox for \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\"" May 14 23:55:04.573786 containerd[1458]: time="2025-05-14T23:55:04.573742197Z" level=info msg="Forcibly stopping sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\"" May 14 23:55:04.573822 containerd[1458]: time="2025-05-14T23:55:04.573807196Z" level=info msg="TearDown network for sandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" successfully" May 14 23:55:04.576157 containerd[1458]: time="2025-05-14T23:55:04.576113204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:55:04.576216 containerd[1458]: time="2025-05-14T23:55:04.576170283Z" level=info msg="RemovePodSandbox \"7baa3219b9d67adfc4dc62350bf1c4169d2538001b9b6678280842b71c445e26\" returns successfully" May 14 23:55:04.579568 containerd[1458]: time="2025-05-14T23:55:04.577741981Z" level=info msg="StopPodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\"" May 14 23:55:04.579568 containerd[1458]: time="2025-05-14T23:55:04.577838300Z" level=info msg="TearDown network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" successfully" May 14 23:55:04.579568 containerd[1458]: time="2025-05-14T23:55:04.577850500Z" level=info msg="StopPodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" returns successfully" May 14 23:55:04.579568 containerd[1458]: time="2025-05-14T23:55:04.578159095Z" level=info msg="RemovePodSandbox for \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\"" May 14 23:55:04.579568 containerd[1458]: time="2025-05-14T23:55:04.578185775Z" level=info msg="Forcibly stopping sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\"" May 14 23:55:04.579568 containerd[1458]: time="2025-05-14T23:55:04.578247654Z" level=info msg="TearDown network for sandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" successfully" May 14 23:55:04.580675 containerd[1458]: time="2025-05-14T23:55:04.580638621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:55:04.580748 containerd[1458]: time="2025-05-14T23:55:04.580696500Z" level=info msg="RemovePodSandbox \"fb6ec1ae0429d0a14ae769b7e263fe479f5e616c184463c2a1dcbd8adbdf81d4\" returns successfully" May 14 23:55:04.581108 containerd[1458]: time="2025-05-14T23:55:04.581083734Z" level=info msg="StopPodSandbox for \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\"" May 14 23:55:04.581227 containerd[1458]: time="2025-05-14T23:55:04.581170133Z" level=info msg="TearDown network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\" successfully" May 14 23:55:04.581227 containerd[1458]: time="2025-05-14T23:55:04.581181173Z" level=info msg="StopPodSandbox for \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\" returns successfully" May 14 23:55:04.581556 containerd[1458]: time="2025-05-14T23:55:04.581514968Z" level=info msg="RemovePodSandbox for \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\"" May 14 23:55:04.581594 containerd[1458]: time="2025-05-14T23:55:04.581559968Z" level=info msg="Forcibly stopping sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\"" May 14 23:55:04.581644 containerd[1458]: time="2025-05-14T23:55:04.581628927Z" level=info msg="TearDown network for sandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\" successfully" May 14 23:55:04.584018 containerd[1458]: time="2025-05-14T23:55:04.583976974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:55:04.584072 containerd[1458]: time="2025-05-14T23:55:04.584035253Z" level=info msg="RemovePodSandbox \"b44bd98b2f5b55a411d3b6eaa96fa5c4f7845d240ab9151a0da7a9f9ef7ca4de\" returns successfully" May 14 23:55:04.584455 containerd[1458]: time="2025-05-14T23:55:04.584407328Z" level=info msg="StopPodSandbox for \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\"" May 14 23:55:04.584529 containerd[1458]: time="2025-05-14T23:55:04.584510606Z" level=info msg="TearDown network for sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\" successfully" May 14 23:55:04.584529 containerd[1458]: time="2025-05-14T23:55:04.584521726Z" level=info msg="StopPodSandbox for \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\" returns successfully" May 14 23:55:04.584986 containerd[1458]: time="2025-05-14T23:55:04.584949400Z" level=info msg="RemovePodSandbox for \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\"" May 14 23:55:04.584986 containerd[1458]: time="2025-05-14T23:55:04.584981360Z" level=info msg="Forcibly stopping sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\"" May 14 23:55:04.585068 containerd[1458]: time="2025-05-14T23:55:04.585053798Z" level=info msg="TearDown network for sandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\" successfully" May 14 23:55:04.609458 containerd[1458]: time="2025-05-14T23:55:04.609394817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:55:04.609579 containerd[1458]: time="2025-05-14T23:55:04.609469735Z" level=info msg="RemovePodSandbox \"2a18df2ef627ce72b669e8188106446d588a1b880c17928bb92e8d6b23f8e9a9\" returns successfully" May 14 23:55:04.609957 containerd[1458]: time="2025-05-14T23:55:04.609929929Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\"" May 14 23:55:04.610041 containerd[1458]: time="2025-05-14T23:55:04.610026288Z" level=info msg="TearDown network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" successfully" May 14 23:55:04.610094 containerd[1458]: time="2025-05-14T23:55:04.610040367Z" level=info msg="StopPodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" returns successfully" May 14 23:55:04.610317 containerd[1458]: time="2025-05-14T23:55:04.610284644Z" level=info msg="RemovePodSandbox for \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\"" May 14 23:55:04.610353 containerd[1458]: time="2025-05-14T23:55:04.610321044Z" level=info msg="Forcibly stopping sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\"" May 14 23:55:04.610398 containerd[1458]: time="2025-05-14T23:55:04.610375923Z" level=info msg="TearDown network for sandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" successfully" May 14 23:55:04.613204 containerd[1458]: time="2025-05-14T23:55:04.613159004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:55:04.613290 containerd[1458]: time="2025-05-14T23:55:04.613217763Z" level=info msg="RemovePodSandbox \"2acec14f0cbe05c7aa548bdcd1bc51128c3959fd460322326d86d668c75644a6\" returns successfully" May 14 23:55:04.613708 containerd[1458]: time="2025-05-14T23:55:04.613675596Z" level=info msg="StopPodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\"" May 14 23:55:04.613784 containerd[1458]: time="2025-05-14T23:55:04.613763275Z" level=info msg="TearDown network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" successfully" May 14 23:55:04.613784 containerd[1458]: time="2025-05-14T23:55:04.613776835Z" level=info msg="StopPodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" returns successfully" May 14 23:55:04.614193 containerd[1458]: time="2025-05-14T23:55:04.614141430Z" level=info msg="RemovePodSandbox for \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\"" May 14 23:55:04.614243 containerd[1458]: time="2025-05-14T23:55:04.614184829Z" level=info msg="Forcibly stopping sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\"" May 14 23:55:04.614314 containerd[1458]: time="2025-05-14T23:55:04.614298708Z" level=info msg="TearDown network for sandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" successfully" May 14 23:55:04.616966 containerd[1458]: time="2025-05-14T23:55:04.616922791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 14 23:55:04.617020 containerd[1458]: time="2025-05-14T23:55:04.616986030Z" level=info msg="RemovePodSandbox \"4e30068388552149fd1dd8a8bcbdeae3234ce1cf01252084cb9fd7e718ae0e43\" returns successfully" May 14 23:55:04.617341 containerd[1458]: time="2025-05-14T23:55:04.617311225Z" level=info msg="StopPodSandbox for \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\"" May 14 23:55:04.617415 containerd[1458]: time="2025-05-14T23:55:04.617399424Z" level=info msg="TearDown network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\" successfully" May 14 23:55:04.617415 containerd[1458]: time="2025-05-14T23:55:04.617413344Z" level=info msg="StopPodSandbox for \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\" returns successfully" May 14 23:55:04.617906 containerd[1458]: time="2025-05-14T23:55:04.617865858Z" level=info msg="RemovePodSandbox for \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\"" May 14 23:55:04.617945 containerd[1458]: time="2025-05-14T23:55:04.617907937Z" level=info msg="Forcibly stopping sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\"" May 14 23:55:04.618014 containerd[1458]: time="2025-05-14T23:55:04.617991376Z" level=info msg="TearDown network for sandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\" successfully" May 14 23:55:04.620379 containerd[1458]: time="2025-05-14T23:55:04.620341103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 14 23:55:04.620437 containerd[1458]: time="2025-05-14T23:55:04.620406462Z" level=info msg="RemovePodSandbox \"cc6f049b3aae03c0fec0542f838c43b74900d5312469ce9addfb3b82fd22b201\" returns successfully"
May 14 23:55:04.620822 containerd[1458]: time="2025-05-14T23:55:04.620799376Z" level=info msg="StopPodSandbox for \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\""
May 14 23:55:04.620915 containerd[1458]: time="2025-05-14T23:55:04.620900255Z" level=info msg="TearDown network for sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\" successfully"
May 14 23:55:04.620960 containerd[1458]: time="2025-05-14T23:55:04.620914055Z" level=info msg="StopPodSandbox for \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\" returns successfully"
May 14 23:55:04.621230 containerd[1458]: time="2025-05-14T23:55:04.621204571Z" level=info msg="RemovePodSandbox for \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\""
May 14 23:55:04.621261 containerd[1458]: time="2025-05-14T23:55:04.621235010Z" level=info msg="Forcibly stopping sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\""
May 14 23:55:04.621362 containerd[1458]: time="2025-05-14T23:55:04.621343249Z" level=info msg="TearDown network for sandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\" successfully"
May 14 23:55:04.623969 containerd[1458]: time="2025-05-14T23:55:04.623928452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.624029 containerd[1458]: time="2025-05-14T23:55:04.623989051Z" level=info msg="RemovePodSandbox \"d4994a3fe97aedde1b1f50d461d41e37685e1713cc1098d2aa9daa494cf0e469\" returns successfully"
May 14 23:55:04.624401 containerd[1458]: time="2025-05-14T23:55:04.624368246Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\""
May 14 23:55:04.624489 containerd[1458]: time="2025-05-14T23:55:04.624469885Z" level=info msg="TearDown network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" successfully"
May 14 23:55:04.624489 containerd[1458]: time="2025-05-14T23:55:04.624485085Z" level=info msg="StopPodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" returns successfully"
May 14 23:55:04.624857 containerd[1458]: time="2025-05-14T23:55:04.624834400Z" level=info msg="RemovePodSandbox for \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\""
May 14 23:55:04.624896 containerd[1458]: time="2025-05-14T23:55:04.624862439Z" level=info msg="Forcibly stopping sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\""
May 14 23:55:04.624939 containerd[1458]: time="2025-05-14T23:55:04.624926358Z" level=info msg="TearDown network for sandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" successfully"
May 14 23:55:04.627317 containerd[1458]: time="2025-05-14T23:55:04.627270125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.627387 containerd[1458]: time="2025-05-14T23:55:04.627327365Z" level=info msg="RemovePodSandbox \"33300e42d9dd9e82c101350186c488335e8fd17e979f85ac1b8f8c792dccbae8\" returns successfully"
May 14 23:55:04.627924 containerd[1458]: time="2025-05-14T23:55:04.627761878Z" level=info msg="StopPodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\""
May 14 23:55:04.627924 containerd[1458]: time="2025-05-14T23:55:04.627854437Z" level=info msg="TearDown network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" successfully"
May 14 23:55:04.627924 containerd[1458]: time="2025-05-14T23:55:04.627864077Z" level=info msg="StopPodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" returns successfully"
May 14 23:55:04.628252 containerd[1458]: time="2025-05-14T23:55:04.628120393Z" level=info msg="RemovePodSandbox for \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\""
May 14 23:55:04.628288 containerd[1458]: time="2025-05-14T23:55:04.628254672Z" level=info msg="Forcibly stopping sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\""
May 14 23:55:04.628338 containerd[1458]: time="2025-05-14T23:55:04.628323311Z" level=info msg="TearDown network for sandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" successfully"
May 14 23:55:04.630909 containerd[1458]: time="2025-05-14T23:55:04.630843435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.630968 containerd[1458]: time="2025-05-14T23:55:04.630953914Z" level=info msg="RemovePodSandbox \"9ea9ec64e4bb1d03aff564342d1365b5c979d1b4c871a06f698e17f49c805e09\" returns successfully"
May 14 23:55:04.631513 containerd[1458]: time="2025-05-14T23:55:04.631329468Z" level=info msg="StopPodSandbox for \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\""
May 14 23:55:04.631513 containerd[1458]: time="2025-05-14T23:55:04.631441107Z" level=info msg="TearDown network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\" successfully"
May 14 23:55:04.631513 containerd[1458]: time="2025-05-14T23:55:04.631453227Z" level=info msg="StopPodSandbox for \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\" returns successfully"
May 14 23:55:04.631725 containerd[1458]: time="2025-05-14T23:55:04.631702343Z" level=info msg="RemovePodSandbox for \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\""
May 14 23:55:04.631756 containerd[1458]: time="2025-05-14T23:55:04.631732823Z" level=info msg="Forcibly stopping sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\""
May 14 23:55:04.631811 containerd[1458]: time="2025-05-14T23:55:04.631797542Z" level=info msg="TearDown network for sandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\" successfully"
May 14 23:55:04.634082 containerd[1458]: time="2025-05-14T23:55:04.634044750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.634156 containerd[1458]: time="2025-05-14T23:55:04.634104749Z" level=info msg="RemovePodSandbox \"e7d06f4651ae11a7f4134e69e0346a46cf053229c492d7a7a8c2189baf27e9e7\" returns successfully"
May 14 23:55:04.634717 containerd[1458]: time="2025-05-14T23:55:04.634599782Z" level=info msg="StopPodSandbox for \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\""
May 14 23:55:04.634717 containerd[1458]: time="2025-05-14T23:55:04.634693261Z" level=info msg="TearDown network for sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\" successfully"
May 14 23:55:04.634717 containerd[1458]: time="2025-05-14T23:55:04.634702421Z" level=info msg="StopPodSandbox for \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\" returns successfully"
May 14 23:55:04.635038 containerd[1458]: time="2025-05-14T23:55:04.634986177Z" level=info msg="RemovePodSandbox for \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\""
May 14 23:55:04.635038 containerd[1458]: time="2025-05-14T23:55:04.635014817Z" level=info msg="Forcibly stopping sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\""
May 14 23:55:04.635094 containerd[1458]: time="2025-05-14T23:55:04.635077136Z" level=info msg="TearDown network for sandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\" successfully"
May 14 23:55:04.637494 containerd[1458]: time="2025-05-14T23:55:04.637446062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.637605 containerd[1458]: time="2025-05-14T23:55:04.637502622Z" level=info msg="RemovePodSandbox \"7313f0b18b2336ff83c1906ff4b6136203180e9f01e834d5a22cfef344d02300\" returns successfully"
May 14 23:55:04.637944 containerd[1458]: time="2025-05-14T23:55:04.637866217Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\""
May 14 23:55:04.643725 containerd[1458]: time="2025-05-14T23:55:04.643673095Z" level=info msg="TearDown network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" successfully"
May 14 23:55:04.643725 containerd[1458]: time="2025-05-14T23:55:04.643710574Z" level=info msg="StopPodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" returns successfully"
May 14 23:55:04.644301 containerd[1458]: time="2025-05-14T23:55:04.644251527Z" level=info msg="RemovePodSandbox for \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\""
May 14 23:55:04.644301 containerd[1458]: time="2025-05-14T23:55:04.644308566Z" level=info msg="Forcibly stopping sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\""
May 14 23:55:04.644416 containerd[1458]: time="2025-05-14T23:55:04.644400565Z" level=info msg="TearDown network for sandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" successfully"
May 14 23:55:04.647140 containerd[1458]: time="2025-05-14T23:55:04.647103127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.647205 containerd[1458]: time="2025-05-14T23:55:04.647160806Z" level=info msg="RemovePodSandbox \"03bf40f226371819ebf2af59750aec491c2dd2ab8a0480f6dccfa7c8e4859ac4\" returns successfully"
May 14 23:55:04.647788 containerd[1458]: time="2025-05-14T23:55:04.647597640Z" level=info msg="StopPodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\""
May 14 23:55:04.647788 containerd[1458]: time="2025-05-14T23:55:04.647686839Z" level=info msg="TearDown network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" successfully"
May 14 23:55:04.647788 containerd[1458]: time="2025-05-14T23:55:04.647696598Z" level=info msg="StopPodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" returns successfully"
May 14 23:55:04.647969 containerd[1458]: time="2025-05-14T23:55:04.647942795Z" level=info msg="RemovePodSandbox for \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\""
May 14 23:55:04.648014 containerd[1458]: time="2025-05-14T23:55:04.647971995Z" level=info msg="Forcibly stopping sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\""
May 14 23:55:04.648092 containerd[1458]: time="2025-05-14T23:55:04.648039274Z" level=info msg="TearDown network for sandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" successfully"
May 14 23:55:04.650361 containerd[1458]: time="2025-05-14T23:55:04.650330801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.650446 containerd[1458]: time="2025-05-14T23:55:04.650385081Z" level=info msg="RemovePodSandbox \"f4af3726f952fd76dc5fff58c58b1d7cce343f4d0d7c542795ad3cf3a2f8dd4b\" returns successfully"
May 14 23:55:04.650844 containerd[1458]: time="2025-05-14T23:55:04.650766115Z" level=info msg="StopPodSandbox for \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\""
May 14 23:55:04.650893 containerd[1458]: time="2025-05-14T23:55:04.650856914Z" level=info msg="TearDown network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\" successfully"
May 14 23:55:04.650893 containerd[1458]: time="2025-05-14T23:55:04.650867034Z" level=info msg="StopPodSandbox for \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\" returns successfully"
May 14 23:55:04.651340 containerd[1458]: time="2025-05-14T23:55:04.651315828Z" level=info msg="RemovePodSandbox for \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\""
May 14 23:55:04.652082 containerd[1458]: time="2025-05-14T23:55:04.651548104Z" level=info msg="Forcibly stopping sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\""
May 14 23:55:04.652082 containerd[1458]: time="2025-05-14T23:55:04.651624663Z" level=info msg="TearDown network for sandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\" successfully"
May 14 23:55:04.654403 containerd[1458]: time="2025-05-14T23:55:04.654340545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.654578 containerd[1458]: time="2025-05-14T23:55:04.654556582Z" level=info msg="RemovePodSandbox \"2f38b5963a74e4a9a11faddf5c19d5dc8b4278f0741f1db7e46084dff223aed6\" returns successfully"
May 14 23:55:04.655038 containerd[1458]: time="2025-05-14T23:55:04.655006816Z" level=info msg="StopPodSandbox for \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\""
May 14 23:55:04.655116 containerd[1458]: time="2025-05-14T23:55:04.655099734Z" level=info msg="TearDown network for sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\" successfully"
May 14 23:55:04.655153 containerd[1458]: time="2025-05-14T23:55:04.655113894Z" level=info msg="StopPodSandbox for \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\" returns successfully"
May 14 23:55:04.655470 containerd[1458]: time="2025-05-14T23:55:04.655444010Z" level=info msg="RemovePodSandbox for \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\""
May 14 23:55:04.655605 containerd[1458]: time="2025-05-14T23:55:04.655586168Z" level=info msg="Forcibly stopping sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\""
May 14 23:55:04.655744 containerd[1458]: time="2025-05-14T23:55:04.655725726Z" level=info msg="TearDown network for sandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\" successfully"
May 14 23:55:04.658390 containerd[1458]: time="2025-05-14T23:55:04.658358769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.658564 containerd[1458]: time="2025-05-14T23:55:04.658526286Z" level=info msg="RemovePodSandbox \"7a807a4fb8d7370a237b6d6dd207ab4e10eef4e1309d7532f13a37352406567a\" returns successfully"
May 14 23:55:04.659105 containerd[1458]: time="2025-05-14T23:55:04.659081278Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\""
May 14 23:55:04.659190 containerd[1458]: time="2025-05-14T23:55:04.659175277Z" level=info msg="TearDown network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" successfully"
May 14 23:55:04.659190 containerd[1458]: time="2025-05-14T23:55:04.659187677Z" level=info msg="StopPodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" returns successfully"
May 14 23:55:04.659499 containerd[1458]: time="2025-05-14T23:55:04.659475153Z" level=info msg="RemovePodSandbox for \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\""
May 14 23:55:04.660260 containerd[1458]: time="2025-05-14T23:55:04.659592471Z" level=info msg="Forcibly stopping sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\""
May 14 23:55:04.660260 containerd[1458]: time="2025-05-14T23:55:04.659695750Z" level=info msg="TearDown network for sandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" successfully"
May 14 23:55:04.662458 containerd[1458]: time="2025-05-14T23:55:04.662414392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.662589 containerd[1458]: time="2025-05-14T23:55:04.662569989Z" level=info msg="RemovePodSandbox \"87ca9c151aa61190fc4680342f5a4b03631cdb8646efa58c4ac2b140adfb783c\" returns successfully"
May 14 23:55:04.663044 containerd[1458]: time="2025-05-14T23:55:04.663018263Z" level=info msg="StopPodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\""
May 14 23:55:04.663125 containerd[1458]: time="2025-05-14T23:55:04.663111062Z" level=info msg="TearDown network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" successfully"
May 14 23:55:04.663156 containerd[1458]: time="2025-05-14T23:55:04.663123982Z" level=info msg="StopPodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" returns successfully"
May 14 23:55:04.663449 containerd[1458]: time="2025-05-14T23:55:04.663409258Z" level=info msg="RemovePodSandbox for \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\""
May 14 23:55:04.663500 containerd[1458]: time="2025-05-14T23:55:04.663454697Z" level=info msg="Forcibly stopping sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\""
May 14 23:55:04.663565 containerd[1458]: time="2025-05-14T23:55:04.663528256Z" level=info msg="TearDown network for sandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" successfully"
May 14 23:55:04.665942 containerd[1458]: time="2025-05-14T23:55:04.665905263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.666008 containerd[1458]: time="2025-05-14T23:55:04.665959182Z" level=info msg="RemovePodSandbox \"6069f7b6005f823a6c0f6bedf541af336a1e9360b68e369bfb47b5702685e544\" returns successfully"
May 14 23:55:04.666636 containerd[1458]: time="2025-05-14T23:55:04.666391336Z" level=info msg="StopPodSandbox for \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\""
May 14 23:55:04.666636 containerd[1458]: time="2025-05-14T23:55:04.666554173Z" level=info msg="TearDown network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\" successfully"
May 14 23:55:04.666636 containerd[1458]: time="2025-05-14T23:55:04.666567493Z" level=info msg="StopPodSandbox for \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\" returns successfully"
May 14 23:55:04.668086 containerd[1458]: time="2025-05-14T23:55:04.666934168Z" level=info msg="RemovePodSandbox for \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\""
May 14 23:55:04.668086 containerd[1458]: time="2025-05-14T23:55:04.666968168Z" level=info msg="Forcibly stopping sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\""
May 14 23:55:04.668086 containerd[1458]: time="2025-05-14T23:55:04.667026127Z" level=info msg="TearDown network for sandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\" successfully"
May 14 23:55:04.669484 containerd[1458]: time="2025-05-14T23:55:04.669435533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.669628 containerd[1458]: time="2025-05-14T23:55:04.669607371Z" level=info msg="RemovePodSandbox \"23c766402a447913dea2632e3ae9916eb4a8b85d8ab4676f6d5f899207c95a30\" returns successfully"
May 14 23:55:04.670055 containerd[1458]: time="2025-05-14T23:55:04.670032485Z" level=info msg="StopPodSandbox for \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\""
May 14 23:55:04.670266 containerd[1458]: time="2025-05-14T23:55:04.670246802Z" level=info msg="TearDown network for sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\" successfully"
May 14 23:55:04.670329 containerd[1458]: time="2025-05-14T23:55:04.670316041Z" level=info msg="StopPodSandbox for \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\" returns successfully"
May 14 23:55:04.670822 containerd[1458]: time="2025-05-14T23:55:04.670790874Z" level=info msg="RemovePodSandbox for \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\""
May 14 23:55:04.670926 containerd[1458]: time="2025-05-14T23:55:04.670825673Z" level=info msg="Forcibly stopping sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\""
May 14 23:55:04.670926 containerd[1458]: time="2025-05-14T23:55:04.670894553Z" level=info msg="TearDown network for sandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\" successfully"
May 14 23:55:04.673904 containerd[1458]: time="2025-05-14T23:55:04.673868671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:55:04.673973 containerd[1458]: time="2025-05-14T23:55:04.673924470Z" level=info msg="RemovePodSandbox \"25766839fa42ff8f6330755ddff16513a3d473d5b48564d30f0a8e5f7f8abe9d\" returns successfully"
May 14 23:55:08.920344 systemd[1]: Started sshd@19-10.0.0.62:22-10.0.0.1:39780.service - OpenSSH per-connection server daemon (10.0.0.1:39780).
May 14 23:55:08.967002 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 39780 ssh2: RSA SHA256:smlrS2t7mLsANcNMaj8ka2nbLtrYQtOgzTJzuRdYSOU
May 14 23:55:08.968687 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:55:08.973300 systemd-logind[1440]: New session 20 of user core.
May 14 23:55:08.985496 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 23:55:09.176596 sshd[5872]: Connection closed by 10.0.0.1 port 39780
May 14 23:55:09.177029 sshd-session[5870]: pam_unix(sshd:session): session closed for user core
May 14 23:55:09.181001 systemd[1]: sshd@19-10.0.0.62:22-10.0.0.1:39780.service: Deactivated successfully.
May 14 23:55:09.184242 systemd[1]: session-20.scope: Deactivated successfully.
May 14 23:55:09.185513 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
May 14 23:55:09.186373 systemd-logind[1440]: Removed session 20.