May 10 00:06:14.954062 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 10 00:06:14.954083 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri May 9 22:24:49 -00 2025 May 10 00:06:14.954093 kernel: KASLR enabled May 10 00:06:14.954099 kernel: efi: EFI v2.7 by EDK II May 10 00:06:14.954105 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 May 10 00:06:14.954110 kernel: random: crng init done May 10 00:06:14.954117 kernel: secureboot: Secure boot disabled May 10 00:06:14.954123 kernel: ACPI: Early table checksum verification disabled May 10 00:06:14.954129 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 10 00:06:14.954137 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 10 00:06:14.954143 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954149 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954155 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954161 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954169 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954176 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954183 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954189 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954196 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:06:14.954202 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 10 00:06:14.954209 kernel: NUMA: Failed to initialise from firmware May 10 00:06:14.954215 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 10 00:06:14.954221 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 10 00:06:14.954228 kernel: Zone ranges: May 10 00:06:14.954234 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 10 00:06:14.954241 kernel: DMA32 empty May 10 00:06:14.954248 kernel: Normal empty May 10 00:06:14.954254 kernel: Movable zone start for each node May 10 00:06:14.954260 kernel: Early memory node ranges May 10 00:06:14.954266 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 10 00:06:14.954273 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 10 00:06:14.954279 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 10 00:06:14.954285 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 10 00:06:14.954291 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 10 00:06:14.954298 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 10 00:06:14.954304 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 10 00:06:14.954310 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 10 00:06:14.954318 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 10 00:06:14.954324 kernel: psci: probing for conduit method from ACPI. May 10 00:06:14.954331 kernel: psci: PSCIv1.1 detected in firmware. 
May 10 00:06:14.954340 kernel: psci: Using standard PSCI v0.2 function IDs May 10 00:06:14.954347 kernel: psci: Trusted OS migration not required May 10 00:06:14.954353 kernel: psci: SMC Calling Convention v1.1 May 10 00:06:14.954362 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 10 00:06:14.954369 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 10 00:06:14.954375 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 10 00:06:14.954383 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 10 00:06:14.954389 kernel: Detected PIPT I-cache on CPU0 May 10 00:06:14.954396 kernel: CPU features: detected: GIC system register CPU interface May 10 00:06:14.954403 kernel: CPU features: detected: Hardware dirty bit management May 10 00:06:14.954409 kernel: CPU features: detected: Spectre-v4 May 10 00:06:14.954416 kernel: CPU features: detected: Spectre-BHB May 10 00:06:14.954423 kernel: CPU features: kernel page table isolation forced ON by KASLR May 10 00:06:14.954438 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 10 00:06:14.954445 kernel: CPU features: detected: ARM erratum 1418040 May 10 00:06:14.954452 kernel: CPU features: detected: SSBS not fully self-synchronizing May 10 00:06:14.954458 kernel: alternatives: applying boot alternatives May 10 00:06:14.954466 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4 May 10 00:06:14.954473 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 00:06:14.954480 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 10 00:06:14.954487 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 00:06:14.954494 kernel: Fallback order for Node 0: 0 May 10 00:06:14.954500 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 10 00:06:14.954507 kernel: Policy zone: DMA May 10 00:06:14.954516 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 00:06:14.954522 kernel: software IO TLB: area num 4. May 10 00:06:14.954529 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 10 00:06:14.954536 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved) May 10 00:06:14.954543 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 10 00:06:14.954550 kernel: rcu: Preemptible hierarchical RCU implementation. May 10 00:06:14.954557 kernel: rcu: RCU event tracing is enabled. May 10 00:06:14.954564 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 10 00:06:14.954571 kernel: Trampoline variant of Tasks RCU enabled. May 10 00:06:14.954578 kernel: Tracing variant of Tasks RCU enabled. May 10 00:06:14.954584 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 10 00:06:14.954591 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 10 00:06:14.954600 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 10 00:06:14.954606 kernel: GICv3: 256 SPIs implemented May 10 00:06:14.954613 kernel: GICv3: 0 Extended SPIs implemented May 10 00:06:14.954620 kernel: Root IRQ handler: gic_handle_irq May 10 00:06:14.954626 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 10 00:06:14.954633 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 10 00:06:14.954640 kernel: ITS [mem 0x08080000-0x0809ffff] May 10 00:06:14.954646 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 10 00:06:14.954653 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 10 00:06:14.954660 kernel: GICv3: using LPI property table @0x00000000400f0000 May 10 00:06:14.954667 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 10 00:06:14.954675 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 10 00:06:14.954682 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 10 00:06:14.954688 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 10 00:06:14.954695 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 10 00:06:14.954702 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 10 00:06:14.954709 kernel: arm-pv: using stolen time PV May 10 00:06:14.954716 kernel: Console: colour dummy device 80x25 May 10 00:06:14.954723 kernel: ACPI: Core revision 20230628 May 10 00:06:14.954730 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 10 00:06:14.954737 kernel: pid_max: default: 32768 minimum: 301 May 10 00:06:14.954745 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 10 00:06:14.954752 kernel: landlock: Up and running. May 10 00:06:14.954759 kernel: SELinux: Initializing. May 10 00:06:14.954766 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 00:06:14.954773 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 00:06:14.954780 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 10 00:06:14.954788 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 10 00:06:14.954795 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 10 00:06:14.954802 kernel: rcu: Hierarchical SRCU implementation. May 10 00:06:14.954810 kernel: rcu: Max phase no-delay instances is 400. May 10 00:06:14.954818 kernel: Platform MSI: ITS@0x8080000 domain created May 10 00:06:14.954825 kernel: PCI/MSI: ITS@0x8080000 domain created May 10 00:06:14.954832 kernel: Remapping and enabling EFI services. May 10 00:06:14.954839 kernel: smp: Bringing up secondary CPUs ... 
May 10 00:06:14.954854 kernel: Detected PIPT I-cache on CPU1 May 10 00:06:14.954862 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 10 00:06:14.954869 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 10 00:06:14.954876 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 10 00:06:14.954883 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 10 00:06:14.954892 kernel: Detected PIPT I-cache on CPU2 May 10 00:06:14.954900 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 10 00:06:14.954912 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 10 00:06:14.954920 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 10 00:06:14.954928 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 10 00:06:14.954935 kernel: Detected PIPT I-cache on CPU3 May 10 00:06:14.954942 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 10 00:06:14.954949 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 10 00:06:14.954956 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 10 00:06:14.954964 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 10 00:06:14.954972 kernel: smp: Brought up 1 node, 4 CPUs May 10 00:06:14.954979 kernel: SMP: Total of 4 processors activated. May 10 00:06:14.954987 kernel: CPU features: detected: 32-bit EL0 Support May 10 00:06:14.954994 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 10 00:06:14.955001 kernel: CPU features: detected: Common not Private translations May 10 00:06:14.955009 kernel: CPU features: detected: CRC32 instructions May 10 00:06:14.955016 kernel: CPU features: detected: Enhanced Virtualization Traps May 10 00:06:14.955026 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 10 00:06:14.955034 kernel: CPU features: detected: LSE atomic instructions May 10 00:06:14.955041 kernel: CPU features: detected: Privileged Access Never May 10 00:06:14.955049 kernel: CPU features: detected: RAS Extension Support May 10 00:06:14.955056 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 10 00:06:14.955063 kernel: CPU: All CPU(s) started at EL1 May 10 00:06:14.955070 kernel: alternatives: applying system-wide alternatives May 10 00:06:14.955078 kernel: devtmpfs: initialized May 10 00:06:14.955085 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 00:06:14.955094 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 10 00:06:14.955101 kernel: pinctrl core: initialized pinctrl subsystem May 10 00:06:14.955108 kernel: SMBIOS 3.0.0 present. 
May 10 00:06:14.955116 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 10 00:06:14.955123 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 00:06:14.955130 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 10 00:06:14.955138 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 10 00:06:14.955145 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 10 00:06:14.955153 kernel: audit: initializing netlink subsys (disabled) May 10 00:06:14.955161 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 May 10 00:06:14.955169 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 00:06:14.955176 kernel: cpuidle: using governor menu May 10 00:06:14.955183 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 10 00:06:14.955191 kernel: ASID allocator initialised with 32768 entries May 10 00:06:14.955198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 10 00:06:14.955205 kernel: Serial: AMBA PL011 UART driver May 10 00:06:14.955213 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 10 00:06:14.955220 kernel: Modules: 0 pages in range for non-PLT usage May 10 00:06:14.955229 kernel: Modules: 508944 pages in range for PLT usage May 10 00:06:14.955236 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 10 00:06:14.955243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 10 00:06:14.955250 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 10 00:06:14.955258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 10 00:06:14.955265 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 10 00:06:14.955272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 10 00:06:14.955280 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 10 00:06:14.955287 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 10 00:06:14.955296 kernel: ACPI: Added _OSI(Module Device) May 10 00:06:14.955303 kernel: ACPI: Added _OSI(Processor Device) May 10 00:06:14.955310 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 00:06:14.955318 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 00:06:14.955325 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 10 00:06:14.955332 kernel: ACPI: Interpreter enabled May 10 00:06:14.955340 kernel: ACPI: Using GIC for interrupt routing May 10 00:06:14.955347 kernel: ACPI: MCFG table detected, 1 entries May 10 00:06:14.955354 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 10 00:06:14.955363 kernel: printk: console [ttyAMA0] enabled May 10 00:06:14.955370 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 00:06:14.955511 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 10 00:06:14.955588 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 10 00:06:14.955655 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 10 00:06:14.955717 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 10 00:06:14.955781 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 10 00:06:14.955792 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 10 00:06:14.955800 
kernel: PCI host bridge to bus 0000:00 May 10 00:06:14.955911 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 10 00:06:14.955973 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 10 00:06:14.956029 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 10 00:06:14.956084 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 00:06:14.956180 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 10 00:06:14.956269 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 10 00:06:14.956350 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 10 00:06:14.956416 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 10 00:06:14.956496 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 10 00:06:14.956566 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 10 00:06:14.956634 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 10 00:06:14.956701 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 10 00:06:14.956765 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 10 00:06:14.956823 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 10 00:06:14.956923 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 10 00:06:14.956934 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 10 00:06:14.956942 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 10 00:06:14.956949 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 10 00:06:14.956956 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 10 00:06:14.956964 kernel: iommu: Default domain type: Translated May 10 00:06:14.956974 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 10 00:06:14.956982 kernel: efivars: Registered efivars operations May 10 00:06:14.956989 kernel: vgaarb: loaded May 10 00:06:14.956997 kernel: clocksource: Switched to clocksource arch_sys_counter May 10 00:06:14.957005 kernel: VFS: Disk quotas dquot_6.6.0 May 10 00:06:14.957012 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 00:06:14.957019 kernel: pnp: PnP ACPI init May 10 00:06:14.957099 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 10 00:06:14.957111 kernel: pnp: PnP ACPI: found 1 devices May 10 00:06:14.957119 kernel: NET: Registered PF_INET protocol family May 10 00:06:14.957126 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 10 00:06:14.957134 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 10 00:06:14.957141 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 00:06:14.957149 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 00:06:14.957156 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 10 00:06:14.957163 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 10 00:06:14.957170 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 00:06:14.957179 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 00:06:14.957186 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 00:06:14.957193 kernel: PCI: CLS 0 bytes, default 64 May 10 00:06:14.957200 kernel: kvm [1]: HYP mode not available 
May 10 00:06:14.957208 kernel: Initialise system trusted keyrings May 10 00:06:14.957215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 10 00:06:14.957222 kernel: Key type asymmetric registered May 10 00:06:14.957229 kernel: Asymmetric key parser 'x509' registered May 10 00:06:14.957236 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 10 00:06:14.957245 kernel: io scheduler mq-deadline registered May 10 00:06:14.957252 kernel: io scheduler kyber registered May 10 00:06:14.957259 kernel: io scheduler bfq registered May 10 00:06:14.957266 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 10 00:06:14.957273 kernel: ACPI: button: Power Button [PWRB] May 10 00:06:14.957281 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 10 00:06:14.957346 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 10 00:06:14.957356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:06:14.957363 kernel: thunder_xcv, ver 1.0 May 10 00:06:14.957372 kernel: thunder_bgx, ver 1.0 May 10 00:06:14.957379 kernel: nicpf, ver 1.0 May 10 00:06:14.957386 kernel: nicvf, ver 1.0 May 10 00:06:14.957466 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 10 00:06:14.957529 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-10T00:06:14 UTC (1746835574) May 10 00:06:14.957539 kernel: hid: raw HID events driver (C) Jiri Kosina May 10 00:06:14.957547 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 10 00:06:14.957554 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 10 00:06:14.957564 kernel: watchdog: Hard watchdog permanently disabled May 10 00:06:14.957571 kernel: NET: Registered PF_INET6 protocol family May 10 00:06:14.957579 kernel: Segment Routing with IPv6 May 10 00:06:14.957586 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:06:14.957593 kernel: NET: Registered PF_PACKET protocol family May 10 00:06:14.957601 kernel: Key type dns_resolver registered May 10 00:06:14.957608 kernel: registered taskstats version 1 May 10 00:06:14.957615 kernel: Loading compiled-in X.509 certificates May 10 00:06:14.957622 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce481d22c53070871912748985d4044dfd149966' May 10 00:06:14.957631 kernel: Key type .fscrypt registered May 10 00:06:14.957638 kernel: Key type fscrypt-provisioning registered May 10 00:06:14.957645 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 10 00:06:14.957652 kernel: ima: Allocated hash algorithm: sha1 May 10 00:06:14.957659 kernel: ima: No architecture policies found May 10 00:06:14.957667 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 10 00:06:14.957674 kernel: clk: Disabling unused clocks May 10 00:06:14.957681 kernel: Freeing unused kernel memory: 39744K May 10 00:06:14.957688 kernel: Run /init as init process May 10 00:06:14.957697 kernel: with arguments: May 10 00:06:14.957704 kernel: /init May 10 00:06:14.957711 kernel: with environment: May 10 00:06:14.957718 kernel: HOME=/ May 10 00:06:14.957725 kernel: TERM=linux May 10 00:06:14.957732 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:06:14.957741 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 10 00:06:14.957750 systemd[1]: Detected virtualization kvm. May 10 00:06:14.957760 systemd[1]: Detected architecture arm64. May 10 00:06:14.957767 systemd[1]: Running in initrd. May 10 00:06:14.957775 systemd[1]: No hostname configured, using default hostname. May 10 00:06:14.957782 systemd[1]: Hostname set to . May 10 00:06:14.957790 systemd[1]: Initializing machine ID from VM UUID. May 10 00:06:14.957798 systemd[1]: Queued start job for default target initrd.target. May 10 00:06:14.957806 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 00:06:14.957814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:06:14.957824 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 10 00:06:14.957832 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 00:06:14.957849 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 10 00:06:14.957858 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 10 00:06:14.957868 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 10 00:06:14.957876 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 10 00:06:14.957884 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:06:14.957895 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 00:06:14.957903 systemd[1]: Reached target paths.target - Path Units. May 10 00:06:14.957910 systemd[1]: Reached target slices.target - Slice Units. May 10 00:06:14.957918 systemd[1]: Reached target swap.target - Swaps. May 10 00:06:14.957926 systemd[1]: Reached target timers.target - Timer Units. May 10 00:06:14.957933 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 10 00:06:14.957941 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 00:06:14.957949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 10 00:06:14.957958 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 10 00:06:14.957965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 10 00:06:14.957973 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 00:06:14.957995 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 00:06:14.958003 systemd[1]: Reached target sockets.target - Socket Units. May 10 00:06:14.958010 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 10 00:06:14.958018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 00:06:14.958026 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 10 00:06:14.958033 systemd[1]: Starting systemd-fsck-usr.service... May 10 00:06:14.958042 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 00:06:14.958050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 00:06:14.958057 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:06:14.958065 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 10 00:06:14.958073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:06:14.958080 systemd[1]: Finished systemd-fsck-usr.service. May 10 00:06:14.958090 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 00:06:14.958098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:06:14.958124 systemd-journald[237]: Collecting audit messages is disabled. May 10 00:06:14.958144 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 00:06:14.958153 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 00:06:14.958161 systemd-journald[237]: Journal started May 10 00:06:14.958180 systemd-journald[237]: Runtime Journal (/run/log/journal/e82480c39650407e8b7136a1fcd5abde) is 5.9M, max 47.3M, 41.4M free. May 10 00:06:14.949581 systemd-modules-load[238]: Inserted module 'overlay' May 10 00:06:14.961869 systemd[1]: Started systemd-journald.service - Journal Service. May 10 00:06:14.961902 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 00:06:14.965008 systemd-modules-load[238]: Inserted module 'br_netfilter' May 10 00:06:14.966139 kernel: Bridge firewalling registered May 10 00:06:14.966383 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 00:06:14.980079 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 00:06:14.981918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 00:06:14.984017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 00:06:14.986032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:06:14.989774 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 10 00:06:14.993527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 00:06:14.999651 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:06:15.001225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 10 00:06:15.004232 dracut-cmdline[271]: dracut-dracut-053 May 10 00:06:15.006401 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9a99b6d651f8aeb5d7bfd4370bc36449b7e5138d2f42e40e0aede009df00f5a4 May 10 00:06:15.012024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 00:06:15.035056 systemd-resolved[288]: Positive Trust Anchors: May 10 00:06:15.035133 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:06:15.035164 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 00:06:15.039975 systemd-resolved[288]: Defaulting to hostname 'linux'. May 10 00:06:15.042438 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 00:06:15.043376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 00:06:15.075872 kernel: SCSI subsystem initialized May 10 00:06:15.080866 kernel: Loading iSCSI transport class v2.0-870. May 10 00:06:15.087873 kernel: iscsi: registered transport (tcp) May 10 00:06:15.101222 kernel: iscsi: registered transport (qla4xxx) May 10 00:06:15.101281 kernel: QLogic iSCSI HBA Driver May 10 00:06:15.144889 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 10 00:06:15.156015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 10 00:06:15.172074 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 00:06:15.172149 kernel: device-mapper: uevent: version 1.0.3 May 10 00:06:15.172166 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 10 00:06:15.223871 kernel: raid6: neonx8 gen() 15786 MB/s May 10 00:06:15.240860 kernel: raid6: neonx4 gen() 14723 MB/s May 10 00:06:15.257863 kernel: raid6: neonx2 gen() 12969 MB/s May 10 00:06:15.274864 kernel: raid6: neonx1 gen() 10287 MB/s May 10 00:06:15.291860 kernel: raid6: int64x8 gen() 6938 MB/s May 10 00:06:15.308866 kernel: raid6: int64x4 gen() 7356 MB/s May 10 00:06:15.325859 kernel: raid6: int64x2 gen() 6108 MB/s May 10 00:06:15.342871 kernel: raid6: int64x1 gen() 5005 MB/s May 10 00:06:15.342898 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s May 10 00:06:15.359862 kernel: raid6: .... 
xor() 11934 MB/s, rmw enabled May 10 00:06:15.359877 kernel: raid6: using neon recovery algorithm May 10 00:06:15.365047 kernel: xor: measuring software checksum speed May 10 00:06:15.365065 kernel: 8regs : 19797 MB/sec May 10 00:06:15.366137 kernel: 32regs : 19679 MB/sec May 10 00:06:15.366151 kernel: arm64_neon : 27034 MB/sec May 10 00:06:15.366160 kernel: xor: using function: arm64_neon (27034 MB/sec) May 10 00:06:15.421392 kernel: Btrfs loaded, zoned=no, fsverity=no May 10 00:06:15.440129 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 10 00:06:15.452047 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:06:15.465277 systemd-udevd[464]: Using default interface naming scheme 'v255'. May 10 00:06:15.468426 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:06:15.471285 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 10 00:06:15.486191 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation May 10 00:06:15.516540 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 10 00:06:15.526022 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 00:06:15.566956 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:06:15.580050 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 10 00:06:15.594922 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 10 00:06:15.596746 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 10 00:06:15.599127 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:06:15.603609 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 00:06:15.614015 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 10 00:06:15.619559 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 10 00:06:15.619721 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 10 00:06:15.622879 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 00:06:15.622923 kernel: GPT:9289727 != 19775487 May 10 00:06:15.622934 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 00:06:15.624186 kernel: GPT:9289727 != 19775487 May 10 00:06:15.624220 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 00:06:15.624978 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:06:15.626482 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 10 00:06:15.632210 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:06:15.632323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:06:15.635980 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 00:06:15.637364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:06:15.637748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:06:15.640183 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 10 00:06:15.654867 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (507) May 10 00:06:15.657555 kernel: BTRFS: device fsid 278061fd-7ea0-499f-a3bc-343431c2d8fa devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (511) May 10 00:06:15.655184 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:06:15.666551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:06:15.672008 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 10 00:06:15.680013 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 10 00:06:15.684140 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 10 00:06:15.685537 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 10 00:06:15.691407 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 10 00:06:15.702006 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 10 00:06:15.703982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 10 00:06:15.710510 disk-uuid[553]: Primary Header is updated. May 10 00:06:15.710510 disk-uuid[553]: Secondary Entries is updated. May 10 00:06:15.710510 disk-uuid[553]: Secondary Header is updated. May 10 00:06:15.715858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:06:15.732679 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:06:16.735861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:06:16.736278 disk-uuid[554]: The operation has completed successfully. May 10 00:06:16.761569 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 00:06:16.761702 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 10 00:06:16.784083 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 10 00:06:16.788248 sh[575]: Success May 10 00:06:16.813422 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 10 00:06:16.845186 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 10 00:06:16.852808 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 10 00:06:16.855447 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 10 00:06:16.866463 kernel: BTRFS info (device dm-0): first mount of filesystem 278061fd-7ea0-499f-a3bc-343431c2d8fa May 10 00:06:16.866503 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 10 00:06:16.866514 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 10 00:06:16.868064 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 10 00:06:16.868084 kernel: BTRFS info (device dm-0): using free space tree May 10 00:06:16.872319 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 10 00:06:16.873889 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 10 00:06:16.883983 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
May 10 00:06:16.885607 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 10 00:06:16.893960 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 10 00:06:16.894026 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:06:16.894037 kernel: BTRFS info (device vda6): using free space tree May 10 00:06:16.898895 kernel: BTRFS info (device vda6): auto enabling async discard May 10 00:06:16.913071 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 00:06:16.914919 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 10 00:06:16.921760 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 10 00:06:16.929037 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 10 00:06:17.005899 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 00:06:17.019043 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 00:06:17.052628 systemd-networkd[765]: lo: Link UP May 10 00:06:17.052640 systemd-networkd[765]: lo: Gained carrier May 10 00:06:17.053717 systemd-networkd[765]: Enumeration completed May 10 00:06:17.054833 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:06:17.054837 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:06:17.055135 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:06:17.057142 systemd-networkd[765]: eth0: Link UP May 10 00:06:17.057145 systemd-networkd[765]: eth0: Gained carrier May 10 00:06:17.057154 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:06:17.057178 systemd[1]: Reached target network.target - Network. May 10 00:06:17.076768 ignition[669]: Ignition 2.20.0 May 10 00:06:17.076779 ignition[669]: Stage: fetch-offline May 10 00:06:17.076817 ignition[669]: no configs at "/usr/lib/ignition/base.d" May 10 00:06:17.076826 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:06:17.077078 ignition[669]: parsed url from cmdline: "" May 10 00:06:17.077081 ignition[669]: no config URL provided May 10 00:06:17.077085 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:06:17.077093 ignition[669]: no config at "/usr/lib/ignition/user.ign" May 10 00:06:17.077126 ignition[669]: op(1): [started] loading QEMU firmware config module May 10 00:06:17.077131 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" May 10 00:06:17.083908 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:06:17.083221 ignition[669]: op(1): [finished] loading QEMU firmware config module May 10 00:06:17.083243 ignition[669]: QEMU firmware config was not found. Ignoring... 
May 10 00:06:17.126568 ignition[669]: parsing config with SHA512: c8bf1edfd35f504f26e2cd94ed706ba859b5351b1c942299955d64399231fc9641a4b9572b26353772050bb5de39eefba25a1829f88f2c46fdf9108048291a3c May 10 00:06:17.134071 unknown[669]: fetched base config from "system" May 10 00:06:17.134082 unknown[669]: fetched user config from "qemu" May 10 00:06:17.134586 ignition[669]: fetch-offline: fetch-offline passed May 10 00:06:17.134677 ignition[669]: Ignition finished successfully May 10 00:06:17.136735 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 10 00:06:17.138696 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 10 00:06:17.149039 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 10 00:06:17.160690 ignition[773]: Ignition 2.20.0 May 10 00:06:17.160700 ignition[773]: Stage: kargs May 10 00:06:17.160871 ignition[773]: no configs at "/usr/lib/ignition/base.d" May 10 00:06:17.160881 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:06:17.161739 ignition[773]: kargs: kargs passed May 10 00:06:17.161781 ignition[773]: Ignition finished successfully May 10 00:06:17.163931 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 10 00:06:17.174023 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 10 00:06:17.183482 ignition[782]: Ignition 2.20.0 May 10 00:06:17.183493 ignition[782]: Stage: disks May 10 00:06:17.183663 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 10 00:06:17.183672 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:06:17.184567 ignition[782]: disks: disks passed May 10 00:06:17.186895 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 10 00:06:17.184616 ignition[782]: Ignition finished successfully May 10 00:06:17.188170 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 10 00:06:17.189298 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 10 00:06:17.190857 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 00:06:17.192174 systemd[1]: Reached target sysinit.target - System Initialization. May 10 00:06:17.193767 systemd[1]: Reached target basic.target - Basic System. May 10 00:06:17.206000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 10 00:06:17.217387 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 10 00:06:17.221125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 10 00:06:17.223151 systemd[1]: Mounting sysroot.mount - /sysroot... May 10 00:06:17.271731 systemd[1]: Mounted sysroot.mount - /sysroot. May 10 00:06:17.273053 kernel: EXT4-fs (vda9): mounted filesystem caef9e74-1f21-4595-8586-7560f5103527 r/w with ordered data mode. Quota mode: none. May 10 00:06:17.272888 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 10 00:06:17.283919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 00:06:17.285495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 10 00:06:17.286854 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
May 10 00:06:17.290892 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801) May 10 00:06:17.286894 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 00:06:17.286915 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 10 00:06:17.293870 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 10 00:06:17.298051 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 10 00:06:17.298077 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:06:17.298088 kernel: BTRFS info (device vda6): using free space tree May 10 00:06:17.298098 kernel: BTRFS info (device vda6): auto enabling async discard May 10 00:06:17.298066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 10 00:06:17.300408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 10 00:06:17.340488 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:06:17.344734 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory May 10 00:06:17.348861 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:06:17.352287 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:06:17.422996 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 10 00:06:17.432969 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 10 00:06:17.434315 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 10 00:06:17.438868 kernel: BTRFS info (device vda6): last unmount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 10 00:06:17.454402 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 10 00:06:17.456575 ignition[914]: INFO : Ignition 2.20.0 May 10 00:06:17.456575 ignition[914]: INFO : Stage: mount May 10 00:06:17.457786 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:06:17.457786 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:06:17.457786 ignition[914]: INFO : mount: mount passed May 10 00:06:17.457786 ignition[914]: INFO : Ignition finished successfully May 10 00:06:17.458986 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 10 00:06:17.469967 systemd[1]: Starting ignition-files.service - Ignition (files)... May 10 00:06:17.865910 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 10 00:06:17.875061 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 00:06:17.880875 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928) May 10 00:06:17.882943 kernel: BTRFS info (device vda6): first mount of filesystem 8d2a58d1-82bb-4bb8-8ae0-4baddd3cc4e0 May 10 00:06:17.882964 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:06:17.882983 kernel: BTRFS info (device vda6): using free space tree May 10 00:06:17.885867 kernel: BTRFS info (device vda6): auto enabling async discard May 10 00:06:17.887326 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 00:06:17.905258 ignition[945]: INFO : Ignition 2.20.0 May 10 00:06:17.905258 ignition[945]: INFO : Stage: files May 10 00:06:17.906589 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:06:17.906589 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:06:17.906589 ignition[945]: DEBUG : files: compiled without relabeling support, skipping May 10 00:06:17.909121 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:06:17.909121 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:06:17.913576 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:06:17.914682 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:06:17.914682 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:06:17.914218 unknown[945]: wrote ssh authorized keys file for user: core May 10 00:06:17.917662 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 10 00:06:17.917662 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 10 00:06:17.970919 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 00:06:18.110009 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:06:18.111745 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:06:18.128462 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:06:18.128462 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:06:18.128462 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:06:18.128462 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:06:18.128462 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 10 00:06:18.439458 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 10 00:06:18.486084 systemd-networkd[765]: eth0: Gained IPv6LL May 10 00:06:18.839177 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:06:18.839177 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 10 00:06:18.842939 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:06:18.842939 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:06:18.842939 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 10 00:06:18.842939 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 10 00:06:18.842939 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 10 00:06:18.842939 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 10 00:06:18.842939 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 10 00:06:18.842939 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 10 00:06:18.866347 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 10 00:06:18.870749 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 10 00:06:18.873001 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 10 00:06:18.873001 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 10 00:06:18.873001 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 10 00:06:18.873001 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:06:18.873001 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:06:18.873001 ignition[945]: INFO : files: files passed May 10 00:06:18.873001 ignition[945]: INFO : Ignition finished successfully May 10 00:06:18.873833 systemd[1]: Finished ignition-files.service - Ignition (files). May 10 00:06:18.892042 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 10 00:06:18.894024 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
May 10 00:06:18.896448 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:06:18.897618 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 10 00:06:18.902862 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory May 10 00:06:18.906339 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:06:18.906339 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 10 00:06:18.908796 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:06:18.909778 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 00:06:18.911009 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 10 00:06:18.928070 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 10 00:06:18.946690 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:06:18.946827 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 10 00:06:18.948496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 10 00:06:18.949809 systemd[1]: Reached target initrd.target - Initrd Default Target. May 10 00:06:18.951157 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 10 00:06:18.951894 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 10 00:06:18.966548 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 00:06:18.978061 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 10 00:06:18.985666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 10 00:06:18.986643 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:06:18.988145 systemd[1]: Stopped target timers.target - Timer Units. May 10 00:06:18.989509 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:06:18.989626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 00:06:18.991436 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 10 00:06:18.992899 systemd[1]: Stopped target basic.target - Basic System. May 10 00:06:18.994065 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 10 00:06:18.995296 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 10 00:06:18.996726 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 10 00:06:18.998177 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 10 00:06:18.999561 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 10 00:06:19.000964 systemd[1]: Stopped target sysinit.target - System Initialization. May 10 00:06:19.002422 systemd[1]: Stopped target local-fs.target - Local File Systems. May 10 00:06:19.003654 systemd[1]: Stopped target swap.target - Swaps. May 10 00:06:19.004742 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:06:19.004869 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 10 00:06:19.006555 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 10 00:06:19.007950 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:06:19.009359 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 10 00:06:19.009461 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 00:06:19.010873 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:06:19.010981 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 10 00:06:19.013027 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:06:19.013138 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 10 00:06:19.014519 systemd[1]: Stopped target paths.target - Path Units. May 10 00:06:19.015632 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:06:19.020885 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:06:19.021810 systemd[1]: Stopped target slices.target - Slice Units. May 10 00:06:19.023412 systemd[1]: Stopped target sockets.target - Socket Units. May 10 00:06:19.024568 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:06:19.024655 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 10 00:06:19.025749 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:06:19.025824 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 00:06:19.026938 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:06:19.027039 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 00:06:19.028342 systemd[1]: ignition-files.service: Deactivated successfully. May 10 00:06:19.028444 systemd[1]: Stopped ignition-files.service - Ignition (files). May 10 00:06:19.040020 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 10 00:06:19.041369 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 10 00:06:19.042028 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:06:19.042137 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:06:19.043473 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:06:19.043566 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 10 00:06:19.048714 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:06:19.048817 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 10 00:06:19.051264 ignition[1001]: INFO : Ignition 2.20.0 May 10 00:06:19.051264 ignition[1001]: INFO : Stage: umount May 10 00:06:19.051264 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:06:19.051264 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:06:19.055806 ignition[1001]: INFO : umount: umount passed May 10 00:06:19.055806 ignition[1001]: INFO : Ignition finished successfully May 10 00:06:19.053209 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 00:06:19.053296 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 10 00:06:19.056126 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 00:06:19.056546 systemd[1]: Stopped target network.target - Network. May 10 00:06:19.057739 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 10 00:06:19.057794 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 10 00:06:19.059235 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:06:19.059276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 10 00:06:19.060475 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 00:06:19.060512 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 10 00:06:19.061693 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 10 00:06:19.061732 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 10 00:06:19.063094 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 10 00:06:19.064276 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 10 00:06:19.065881 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:06:19.065963 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 10 00:06:19.067458 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:06:19.067549 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 10 00:06:19.067903 systemd-networkd[765]: eth0: DHCPv6 lease lost May 10 00:06:19.069617 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:06:19.070926 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 10 00:06:19.072476 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:06:19.072576 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 10 00:06:19.074720 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:06:19.074776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 10 00:06:19.080960 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 10 00:06:19.081743 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:06:19.081808 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 00:06:19.083309 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:06:19.083350 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 00:06:19.084614 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:06:19.084653 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 10 00:06:19.085919 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 10 00:06:19.085955 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:06:19.087527 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:06:19.096540 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:06:19.096633 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 10 00:06:19.101786 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 00:06:19.101972 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:06:19.103656 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 00:06:19.103696 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 10 00:06:19.105087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:06:19.105117 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 10 00:06:19.106469 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:06:19.106515 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 10 00:06:19.108597 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:06:19.108639 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 10 00:06:19.110679 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:06:19.110729 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:06:19.122016 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 10 00:06:19.122786 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 10 00:06:19.122863 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 00:06:19.124476 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 10 00:06:19.124520 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 10 00:06:19.125942 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:06:19.125978 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:06:19.127597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:06:19.127633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:06:19.129325 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:06:19.129433 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 10 00:06:19.131125 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 10 00:06:19.135051 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 10 00:06:19.143621 systemd[1]: Switching root. May 10 00:06:19.168062 systemd-journald[237]: Journal stopped May 10 00:06:19.867745 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 10 00:06:19.867798 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:06:19.867811 kernel: SELinux: policy capability open_perms=1 May 10 00:06:19.867821 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:06:19.867834 kernel: SELinux: policy capability always_check_network=0 May 10 00:06:19.867956 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:06:19.867968 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:06:19.867978 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:06:19.867988 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:06:19.867998 kernel: audit: type=1403 audit(1746835579.310:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:06:19.868009 systemd[1]: Successfully loaded SELinux policy in 30.257ms. May 10 00:06:19.868030 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.890ms. May 10 00:06:19.868042 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 10 00:06:19.868057 systemd[1]: Detected virtualization kvm. May 10 00:06:19.868068 systemd[1]: Detected architecture arm64. 
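These entries record the hand-off from the initrd to the real root: the initrd journald stops, the SELinux policy is loaded, and systemd 255 re-announces itself with its feature flags on the new root. A few illustrative checks that reproduce the same facts after boot, assuming only a shell on the node:

    journalctl -b --grep 'Switching root|Successfully loaded SELinux policy'   # same transition in the journal
    systemd-detect-virt     # should report "kvm", matching the detection above
    systemctl --version     # systemd 255 plus the feature flags listed above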
May 10 00:06:19.868078 systemd[1]: Detected first boot. May 10 00:06:19.868089 systemd[1]: Initializing machine ID from VM UUID. May 10 00:06:19.868099 zram_generator::config[1044]: No configuration found. May 10 00:06:19.868111 systemd[1]: Populated /etc with preset unit settings. May 10 00:06:19.868121 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 00:06:19.868131 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 10 00:06:19.868143 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 00:06:19.868155 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 10 00:06:19.868166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 10 00:06:19.868176 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 10 00:06:19.868187 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 10 00:06:19.868198 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 10 00:06:19.868208 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 10 00:06:19.868219 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 10 00:06:19.868229 systemd[1]: Created slice user.slice - User and Session Slice. May 10 00:06:19.868241 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 00:06:19.868253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:06:19.868263 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 10 00:06:19.868273 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 10 00:06:19.868284 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 10 00:06:19.868295 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 00:06:19.868306 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 10 00:06:19.868316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:06:19.868327 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 10 00:06:19.868339 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 10 00:06:19.868349 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 10 00:06:19.868360 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 10 00:06:19.868370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:06:19.868381 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 00:06:19.868391 systemd[1]: Reached target slices.target - Slice Units. May 10 00:06:19.868402 systemd[1]: Reached target swap.target - Swaps. May 10 00:06:19.868412 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 10 00:06:19.868424 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 10 00:06:19.868434 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 00:06:19.868451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
May 10 00:06:19.868464 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 00:06:19.868474 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 10 00:06:19.868485 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 10 00:06:19.868495 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 10 00:06:19.868506 systemd[1]: Mounting media.mount - External Media Directory... May 10 00:06:19.868517 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 10 00:06:19.868530 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 10 00:06:19.868541 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 10 00:06:19.868551 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:06:19.868562 systemd[1]: Reached target machines.target - Containers. May 10 00:06:19.868572 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 10 00:06:19.868583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:06:19.868593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 00:06:19.868604 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 10 00:06:19.868616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:06:19.868626 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 00:06:19.868636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:06:19.868646 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 10 00:06:19.868657 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:06:19.868668 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:06:19.868678 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 00:06:19.868689 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 10 00:06:19.868699 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 00:06:19.868711 kernel: fuse: init (API version 7.39) May 10 00:06:19.868721 systemd[1]: Stopped systemd-fsck-usr.service. May 10 00:06:19.868731 kernel: loop: module loaded May 10 00:06:19.868742 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 00:06:19.868753 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 00:06:19.868764 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 10 00:06:19.868773 kernel: ACPI: bus type drm_connector registered May 10 00:06:19.868803 systemd-journald[1104]: Collecting audit messages is disabled. May 10 00:06:19.868835 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 10 00:06:19.868855 systemd-journald[1104]: Journal started May 10 00:06:19.868878 systemd-journald[1104]: Runtime Journal (/run/log/journal/e82480c39650407e8b7136a1fcd5abde) is 5.9M, max 47.3M, 41.4M free. May 10 00:06:19.680331 systemd[1]: Queued start job for default target multi-user.target. 
May 10 00:06:19.699619 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 10 00:06:19.699967 systemd[1]: systemd-journald.service: Deactivated successfully. May 10 00:06:19.876695 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 00:06:19.881479 systemd[1]: verity-setup.service: Deactivated successfully. May 10 00:06:19.881543 systemd[1]: Stopped verity-setup.service. May 10 00:06:19.881569 systemd[1]: Started systemd-journald.service - Journal Service. May 10 00:06:19.883194 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 10 00:06:19.884624 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 10 00:06:19.886044 systemd[1]: Mounted media.mount - External Media Directory. May 10 00:06:19.887195 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 10 00:06:19.888548 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 10 00:06:19.889810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 10 00:06:19.892869 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 10 00:06:19.894333 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:06:19.895920 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:06:19.896063 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 10 00:06:19.897596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:06:19.897737 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:06:19.900216 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:06:19.900495 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:06:19.901582 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:06:19.901748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:06:19.902976 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:06:19.903136 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 10 00:06:19.904268 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:06:19.904407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:06:19.905735 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 00:06:19.907060 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 10 00:06:19.908465 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 10 00:06:19.922509 systemd[1]: Reached target network-pre.target - Preparation for Network. May 10 00:06:19.937977 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 10 00:06:19.939960 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 10 00:06:19.940829 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:06:19.940887 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 00:06:19.942746 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 10 00:06:19.944964 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 10 00:06:19.946834 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 10 00:06:19.947723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:06:19.949563 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 10 00:06:19.951283 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 10 00:06:19.952351 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:06:19.954081 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 10 00:06:19.955072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:06:19.957233 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 00:06:19.960181 systemd-journald[1104]: Time spent on flushing to /var/log/journal/e82480c39650407e8b7136a1fcd5abde is 12.404ms for 857 entries. May 10 00:06:19.960181 systemd-journald[1104]: System Journal (/var/log/journal/e82480c39650407e8b7136a1fcd5abde) is 8.0M, max 195.6M, 187.6M free. May 10 00:06:19.981977 systemd-journald[1104]: Received client request to flush runtime journal. May 10 00:06:19.962222 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 10 00:06:19.965066 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 10 00:06:19.967603 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:06:19.968784 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 10 00:06:19.969953 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 10 00:06:19.972996 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 10 00:06:19.978779 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 10 00:06:19.983664 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 10 00:06:19.987409 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 10 00:06:19.989166 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 10 00:06:19.993895 kernel: loop0: detected capacity change from 0 to 113536 May 10 00:06:19.998031 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 10 00:06:19.999616 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 00:06:20.003989 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 10 00:06:20.013360 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:06:20.014132 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 10 00:06:20.016970 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:06:20.018860 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. May 10 00:06:20.018877 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. May 10 00:06:20.026942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
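systemd-journal-flush.service above asks journald to move the runtime journal into persistent storage under /var/log/journal; the sizes quoted (5.9M runtime, 8.0M system journal) come from journald itself. Equivalent manual checks, offered only as an illustration (root is assumed for the flush request):

    journalctl --disk-usage   # current journal size on disk
    journalctl --flush        # same request the flush service sends to journald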
May 10 00:06:20.035063 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 10 00:06:20.042900 kernel: loop1: detected capacity change from 0 to 194096 May 10 00:06:20.063058 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 10 00:06:20.072076 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 00:06:20.086141 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. May 10 00:06:20.086163 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. May 10 00:06:20.090870 kernel: loop2: detected capacity change from 0 to 116808 May 10 00:06:20.093136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 00:06:20.126878 kernel: loop3: detected capacity change from 0 to 113536 May 10 00:06:20.132989 kernel: loop4: detected capacity change from 0 to 194096 May 10 00:06:20.142870 kernel: loop5: detected capacity change from 0 to 116808 May 10 00:06:20.145336 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 10 00:06:20.145775 (sd-merge)[1186]: Merged extensions into '/usr'. May 10 00:06:20.150716 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... May 10 00:06:20.150731 systemd[1]: Reloading... May 10 00:06:20.216870 zram_generator::config[1215]: No configuration found. May 10 00:06:20.260931 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:06:20.312001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:20.348325 systemd[1]: Reloading finished in 197 ms. May 10 00:06:20.378315 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 10 00:06:20.379758 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 10 00:06:20.392050 systemd[1]: Starting ensure-sysext.service... May 10 00:06:20.394450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 00:06:20.406708 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... May 10 00:06:20.406861 systemd[1]: Reloading... May 10 00:06:20.415217 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:06:20.415504 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 10 00:06:20.416191 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:06:20.416438 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. May 10 00:06:20.416499 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. May 10 00:06:20.422607 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. May 10 00:06:20.422620 systemd-tmpfiles[1247]: Skipping /boot May 10 00:06:20.429628 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. May 10 00:06:20.429644 systemd-tmpfiles[1247]: Skipping /boot May 10 00:06:20.459882 zram_generator::config[1274]: No configuration found. 
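The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, after which the manager reloads its unit files. A hedged way to inspect that state later (the image directories may differ depending on how the extensions were provisioned):

    systemd-sysext status                    # merged images and the resulting /usr overlay
    ls /etc/extensions /var/lib/extensions   # usual search paths for .raw images (either may be absent)
    systemd-sysext refresh                   # re-merge after adding or removing an image (root required)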
May 10 00:06:20.541599 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:20.578751 systemd[1]: Reloading finished in 171 ms. May 10 00:06:20.595382 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 10 00:06:20.609299 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:06:20.618588 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 10 00:06:20.621151 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 10 00:06:20.622507 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:06:20.623676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:06:20.628117 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:06:20.632132 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:06:20.633625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:06:20.634792 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 10 00:06:20.641628 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 00:06:20.644112 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:06:20.647822 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 10 00:06:20.651000 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:06:20.652108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:06:20.655223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:06:20.655389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:06:20.657462 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:06:20.657650 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:06:20.662396 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 10 00:06:20.670761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:06:20.681226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:06:20.683072 systemd-udevd[1323]: Using default interface naming scheme 'v255'. May 10 00:06:20.683627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:06:20.687570 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:06:20.691097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:06:20.693057 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 10 00:06:20.696801 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 10 00:06:20.700529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:06:20.700732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 10 00:06:20.705614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:06:20.705764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:06:20.707633 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:06:20.707763 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:06:20.709523 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 10 00:06:20.715792 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:06:20.717832 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 10 00:06:20.719202 augenrules[1352]: No rules May 10 00:06:20.719588 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 10 00:06:20.721428 systemd[1]: audit-rules.service: Deactivated successfully. May 10 00:06:20.721594 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 10 00:06:20.730081 systemd[1]: Finished ensure-sysext.service. May 10 00:06:20.737372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:06:20.749109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:06:20.753455 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 00:06:20.757530 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:06:20.761022 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:06:20.762540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:06:20.768948 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 00:06:20.777463 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 10 00:06:20.780747 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:06:20.781046 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 10 00:06:20.782869 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1353) May 10 00:06:20.784624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:06:20.784776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:06:20.786166 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:06:20.787884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:06:20.788992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:06:20.789126 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:06:20.806019 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 10 00:06:20.823595 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:06:20.823758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:06:20.836777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 10 00:06:20.844036 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 10 00:06:20.844953 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:06:20.845039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:06:20.860168 systemd-resolved[1322]: Positive Trust Anchors: May 10 00:06:20.864731 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 10 00:06:20.865199 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:06:20.865249 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 00:06:20.866351 systemd[1]: Reached target time-set.target - System Time Set. May 10 00:06:20.884889 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 10 00:06:20.896028 systemd-resolved[1322]: Defaulting to hostname 'linux'. May 10 00:06:20.901721 systemd-networkd[1386]: lo: Link UP May 10 00:06:20.901731 systemd-networkd[1386]: lo: Gained carrier May 10 00:06:20.902695 systemd-networkd[1386]: Enumeration completed May 10 00:06:20.902830 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:06:20.916261 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:06:20.916271 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:06:20.917042 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 10 00:06:20.918151 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 00:06:20.919828 systemd-networkd[1386]: eth0: Link UP May 10 00:06:20.919838 systemd-networkd[1386]: eth0: Gained carrier May 10 00:06:20.919859 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:06:20.920883 systemd[1]: Reached target network.target - Network. May 10 00:06:20.921710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 00:06:20.924698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:06:20.933932 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 10 00:06:20.937987 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 10 00:06:20.947069 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:06:20.947818 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. May 10 00:06:20.948408 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
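At this point eth0 holds a DHCPv4 lease (10.0.0.141/16 via gateway 10.0.0.1) and systemd-timesyncd has synchronized against 10.0.0.1:123. Illustrative status commands that should echo those same values on the running system:

    networkctl status eth0          # lease, gateway and carrier state as logged above
    resolvectl status eth0          # per-link DNS configuration from systemd-resolved
    timedatectl timesync-status     # NTP server and sync state from systemd-timesyncd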
May 10 00:06:20.948467 systemd-timesyncd[1388]: Initial clock synchronization to Sat 2025-05-10 00:06:20.697396 UTC. May 10 00:06:20.961447 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:06:20.984097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:06:20.994726 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 10 00:06:20.996104 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 00:06:20.997045 systemd[1]: Reached target sysinit.target - System Initialization. May 10 00:06:20.998022 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 10 00:06:20.999016 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 10 00:06:21.000130 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 10 00:06:21.001159 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 10 00:06:21.002200 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 10 00:06:21.003214 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:06:21.003244 systemd[1]: Reached target paths.target - Path Units. May 10 00:06:21.003963 systemd[1]: Reached target timers.target - Timer Units. May 10 00:06:21.005556 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 10 00:06:21.008186 systemd[1]: Starting docker.socket - Docker Socket for the API... May 10 00:06:21.017834 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 10 00:06:21.020324 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 10 00:06:21.022023 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 10 00:06:21.023171 systemd[1]: Reached target sockets.target - Socket Units. May 10 00:06:21.024146 systemd[1]: Reached target basic.target - Basic System. May 10 00:06:21.025207 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 10 00:06:21.025240 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 10 00:06:21.026274 systemd[1]: Starting containerd.service - containerd container runtime... May 10 00:06:21.028296 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:06:21.028701 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 10 00:06:21.031801 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 10 00:06:21.034023 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 10 00:06:21.035158 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 10 00:06:21.037171 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 10 00:06:21.042116 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 10 00:06:21.045177 jq[1422]: false May 10 00:06:21.043944 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
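basic.target is reached here with several sockets and timers already active (docker.socket, sshd.socket, logrotate.timer, mdadm.timer, systemd-tmpfiles-clean.timer). A quick, non-authoritative way to confirm the same set after boot:

    systemctl list-sockets   # should include docker.socket and sshd.socket
    systemctl list-timers    # logrotate, mdadm and tmpfiles-clean timers from the log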
May 10 00:06:21.046089 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 10 00:06:21.052966 systemd[1]: Starting systemd-logind.service - User Login Management... May 10 00:06:21.054348 extend-filesystems[1423]: Found loop3 May 10 00:06:21.054538 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:06:21.056541 extend-filesystems[1423]: Found loop4 May 10 00:06:21.056541 extend-filesystems[1423]: Found loop5 May 10 00:06:21.056541 extend-filesystems[1423]: Found vda May 10 00:06:21.056541 extend-filesystems[1423]: Found vda1 May 10 00:06:21.056541 extend-filesystems[1423]: Found vda2 May 10 00:06:21.056541 extend-filesystems[1423]: Found vda3 May 10 00:06:21.056541 extend-filesystems[1423]: Found usr May 10 00:06:21.056541 extend-filesystems[1423]: Found vda4 May 10 00:06:21.056541 extend-filesystems[1423]: Found vda6 May 10 00:06:21.056541 extend-filesystems[1423]: Found vda7 May 10 00:06:21.056541 extend-filesystems[1423]: Found vda9 May 10 00:06:21.056541 extend-filesystems[1423]: Checking size of /dev/vda9 May 10 00:06:21.055018 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:06:21.060151 dbus-daemon[1421]: [system] SELinux support is enabled May 10 00:06:21.055641 systemd[1]: Starting update-engine.service - Update Engine... May 10 00:06:21.057371 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 10 00:06:21.059555 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 10 00:06:21.063647 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 10 00:06:21.068644 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:06:21.068854 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 10 00:06:21.070248 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:06:21.072030 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 10 00:06:21.074206 jq[1433]: true May 10 00:06:21.082001 extend-filesystems[1423]: Resized partition /dev/vda9 May 10 00:06:21.085461 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:06:21.085494 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 10 00:06:21.086613 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:06:21.086630 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
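extend-filesystems has enumerated the block devices and checked /dev/vda9; the on-line ext4 grow itself is reported a few entries below (553472 to 1864699 blocks). A hedged shell equivalent of that step, assuming a root shell on the node:

    lsblk /dev/vda        # partition layout, with vda9 mounted as /
    resize2fs /dev/vda9   # on-line grow of the mounted ext4 root, as the resize2fs/EXT4-fs lines below report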
May 10 00:06:21.088564 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) May 10 00:06:21.090379 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 10 00:06:21.096845 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 10 00:06:21.101093 jq[1446]: true May 10 00:06:21.104353 tar[1439]: linux-arm64/helm May 10 00:06:21.109798 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1354) May 10 00:06:21.108799 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:06:21.109006 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 10 00:06:21.113264 update_engine[1432]: I20250510 00:06:21.113122 1432 main.cc:92] Flatcar Update Engine starting May 10 00:06:21.115153 systemd[1]: Started update-engine.service - Update Engine. May 10 00:06:21.115333 update_engine[1432]: I20250510 00:06:21.115296 1432 update_check_scheduler.cc:74] Next update check in 6m16s May 10 00:06:21.119096 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 10 00:06:21.132864 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 10 00:06:21.144701 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (Power Button) May 10 00:06:21.145098 systemd-logind[1429]: New seat seat0. May 10 00:06:21.163171 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 10 00:06:21.163171 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1 May 10 00:06:21.163171 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 10 00:06:21.163040 systemd[1]: Started systemd-logind.service - User Login Management. May 10 00:06:21.175216 extend-filesystems[1423]: Resized filesystem in /dev/vda9 May 10 00:06:21.164263 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:06:21.164447 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 10 00:06:21.214791 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:06:21.218528 bash[1477]: Updated "/home/core/.ssh/authorized_keys" May 10 00:06:21.222866 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 10 00:06:21.224368 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 10 00:06:21.303720 containerd[1455]: time="2025-05-10T00:06:21.303627889Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 10 00:06:21.329181 containerd[1455]: time="2025-05-10T00:06:21.329075750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330377903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330416067Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330432999Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330591467Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330608632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330660550Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330672368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330836338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330862491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330874347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:06:21.330974 containerd[1455]: time="2025-05-10T00:06:21.330882677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:06:21.331256 containerd[1455]: time="2025-05-10T00:06:21.330950365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:06:21.331256 containerd[1455]: time="2025-05-10T00:06:21.331128090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:06:21.331256 containerd[1455]: time="2025-05-10T00:06:21.331216701Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:06:21.331256 containerd[1455]: time="2025-05-10T00:06:21.331229254Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:06:21.331324 containerd[1455]: time="2025-05-10T00:06:21.331298144Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:06:21.331385 containerd[1455]: time="2025-05-10T00:06:21.331342391Z" level=info msg="metadata content store policy set" policy=shared May 10 00:06:21.334544 containerd[1455]: time="2025-05-10T00:06:21.334518462Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:06:21.334615 containerd[1455]: time="2025-05-10T00:06:21.334565732Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:06:21.334615 containerd[1455]: time="2025-05-10T00:06:21.334581540Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 10 00:06:21.334615 containerd[1455]: time="2025-05-10T00:06:21.334596573Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 10 00:06:21.334615 containerd[1455]: time="2025-05-10T00:06:21.334610444Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:06:21.334765 containerd[1455]: time="2025-05-10T00:06:21.334747564Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:06:21.335002 containerd[1455]: time="2025-05-10T00:06:21.334981973Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:06:21.335094 containerd[1455]: time="2025-05-10T00:06:21.335078255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 10 00:06:21.335118 containerd[1455]: time="2025-05-10T00:06:21.335097821Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 10 00:06:21.335118 containerd[1455]: time="2025-05-10T00:06:21.335112390Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 10 00:06:21.335155 containerd[1455]: time="2025-05-10T00:06:21.335124943Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:06:21.335155 containerd[1455]: time="2025-05-10T00:06:21.335136915Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:06:21.335155 containerd[1455]: time="2025-05-10T00:06:21.335148384Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:06:21.335205 containerd[1455]: time="2025-05-10T00:06:21.335160666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:06:21.335205 containerd[1455]: time="2025-05-10T00:06:21.335174033Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:06:21.335205 containerd[1455]: time="2025-05-10T00:06:21.335185386Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:06:21.335205 containerd[1455]: time="2025-05-10T00:06:21.335197009Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:06:21.335266 containerd[1455]: time="2025-05-10T00:06:21.335207819Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:06:21.335266 containerd[1455]: time="2025-05-10T00:06:21.335226727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335266 containerd[1455]: time="2025-05-10T00:06:21.335238854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335266 containerd[1455]: time="2025-05-10T00:06:21.335249936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335266 containerd[1455]: time="2025-05-10T00:06:21.335262295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 10 00:06:21.335350 containerd[1455]: time="2025-05-10T00:06:21.335273415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335350 containerd[1455]: time="2025-05-10T00:06:21.335285155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335350 containerd[1455]: time="2025-05-10T00:06:21.335295461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335350 containerd[1455]: time="2025-05-10T00:06:21.335310805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335350 containerd[1455]: time="2025-05-10T00:06:21.335322506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335350 containerd[1455]: time="2025-05-10T00:06:21.335335098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335350 containerd[1455]: time="2025-05-10T00:06:21.335349821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335459 containerd[1455]: time="2025-05-10T00:06:21.335362607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335459 containerd[1455]: time="2025-05-10T00:06:21.335374463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335459 containerd[1455]: time="2025-05-10T00:06:21.335387985Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 10 00:06:21.335459 containerd[1455]: time="2025-05-10T00:06:21.335405421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335459 containerd[1455]: time="2025-05-10T00:06:21.335417432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335459 containerd[1455]: time="2025-05-10T00:06:21.335427118Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:06:21.335649 containerd[1455]: time="2025-05-10T00:06:21.335581092Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:06:21.335649 containerd[1455]: time="2025-05-10T00:06:21.335601085Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 10 00:06:21.335649 containerd[1455]: time="2025-05-10T00:06:21.335623208Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:06:21.335649 containerd[1455]: time="2025-05-10T00:06:21.335634677Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 10 00:06:21.335649 containerd[1455]: time="2025-05-10T00:06:21.335643007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:06:21.335649 containerd[1455]: time="2025-05-10T00:06:21.335653468Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 10 00:06:21.335764 containerd[1455]: time="2025-05-10T00:06:21.335662806Z" level=info msg="NRI interface is disabled by configuration." May 10 00:06:21.335764 containerd[1455]: time="2025-05-10T00:06:21.335672957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 10 00:06:21.336057 containerd[1455]: time="2025-05-10T00:06:21.336007252Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:06:21.336057 containerd[1455]: time="2025-05-10T00:06:21.336056730Z" level=info msg="Connect containerd service" May 10 00:06:21.336191 containerd[1455]: time="2025-05-10T00:06:21.336089198Z" level=info msg="using legacy CRI server" May 10 00:06:21.336191 containerd[1455]: time="2025-05-10T00:06:21.336097219Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 10 00:06:21.336347 containerd[1455]: time="2025-05-10T00:06:21.336331512Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:06:21.337098 
containerd[1455]: time="2025-05-10T00:06:21.337071935Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:06:21.337326 containerd[1455]: time="2025-05-10T00:06:21.337277402Z" level=info msg="Start subscribing containerd event" May 10 00:06:21.337326 containerd[1455]: time="2025-05-10T00:06:21.337321881Z" level=info msg="Start recovering state" May 10 00:06:21.337390 containerd[1455]: time="2025-05-10T00:06:21.337377248Z" level=info msg="Start event monitor" May 10 00:06:21.337413 containerd[1455]: time="2025-05-10T00:06:21.337391700Z" level=info msg="Start snapshots syncer" May 10 00:06:21.337413 containerd[1455]: time="2025-05-10T00:06:21.337401154Z" level=info msg="Start cni network conf syncer for default" May 10 00:06:21.337413 containerd[1455]: time="2025-05-10T00:06:21.337408981Z" level=info msg="Start streaming server" May 10 00:06:21.338088 containerd[1455]: time="2025-05-10T00:06:21.338067574Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 00:06:21.338123 containerd[1455]: time="2025-05-10T00:06:21.338113526Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:06:21.341869 systemd[1]: Started containerd.service - containerd container runtime. May 10 00:06:21.343161 containerd[1455]: time="2025-05-10T00:06:21.343046970Z" level=info msg="containerd successfully booted in 0.040261s" May 10 00:06:21.473267 tar[1439]: linux-arm64/LICENSE May 10 00:06:21.473398 tar[1439]: linux-arm64/README.md May 10 00:06:21.485272 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 00:06:22.073275 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:06:22.092811 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 10 00:06:22.108182 systemd[1]: Starting issuegen.service - Generate /run/issue... May 10 00:06:22.114895 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:06:22.116870 systemd[1]: Finished issuegen.service - Generate /run/issue. May 10 00:06:22.119417 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 10 00:06:22.134122 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 10 00:06:22.136773 systemd[1]: Started getty@tty1.service - Getty on tty1. May 10 00:06:22.138925 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 10 00:06:22.139975 systemd[1]: Reached target getty.target - Login Prompts. May 10 00:06:22.645970 systemd-networkd[1386]: eth0: Gained IPv6LL May 10 00:06:22.649894 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 10 00:06:22.651282 systemd[1]: Reached target network-online.target - Network is Online. May 10 00:06:22.663172 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 10 00:06:22.665636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:22.667781 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 10 00:06:22.684112 systemd[1]: coreos-metadata.service: Deactivated successfully. May 10 00:06:22.684919 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 10 00:06:22.686289 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
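The containerd error above, "no network config found in /etc/cni/net.d: cni plugin not initialized", is normal on a node where nothing has installed a CNI configuration yet: the CRI plugin is pointed at /etc/cni/net.d for configs and /opt/cni/bin for binaries, and loads at most one conflist (NetworkPluginMaxConfNum:1). As a rough sketch of the kind of file that eventually satisfies that check, a bridge-based conflist such as the one below could be dropped into that directory (for example as 10-containerd-net.conflist). The network name, bridge device, and 10.88.0.0/16 subnet are illustrative placeholders, not values taken from this log; in a kubeadm cluster the chosen network add-on normally installs its own config instead.

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

The "Start cni network conf syncer for default" line above is the watcher that picks such a file up, so containerd does not need to be restarted once a config appears.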
May 10 00:06:22.690787 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 10 00:06:23.160614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:23.162009 systemd[1]: Reached target multi-user.target - Multi-User System. May 10 00:06:23.163102 systemd[1]: Startup finished in 568ms (kernel) + 4.603s (initrd) + 3.888s (userspace) = 9.060s. May 10 00:06:23.165127 (kubelet)[1534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:06:23.634737 kubelet[1534]: E0510 00:06:23.634618 1534 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:06:23.637482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:06:23.637627 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:06:27.270779 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 10 00:06:27.271992 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:35466.service - OpenSSH per-connection server daemon (10.0.0.1:35466). May 10 00:06:27.336520 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 35466 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:06:27.338537 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:06:27.350071 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 10 00:06:27.363371 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 10 00:06:27.365319 systemd-logind[1429]: New session 1 of user core. May 10 00:06:27.373320 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 10 00:06:27.376901 systemd[1]: Starting user@500.service - User Manager for UID 500... May 10 00:06:27.383759 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:06:27.459147 systemd[1553]: Queued start job for default target default.target. May 10 00:06:27.474064 systemd[1553]: Created slice app.slice - User Application Slice. May 10 00:06:27.474091 systemd[1553]: Reached target paths.target - Paths. May 10 00:06:27.474103 systemd[1553]: Reached target timers.target - Timers. May 10 00:06:27.475536 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... May 10 00:06:27.485764 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 10 00:06:27.485860 systemd[1553]: Reached target sockets.target - Sockets. May 10 00:06:27.485875 systemd[1553]: Reached target basic.target - Basic System. May 10 00:06:27.485915 systemd[1553]: Reached target default.target - Main User Target. May 10 00:06:27.485941 systemd[1553]: Startup finished in 96ms. May 10 00:06:27.486136 systemd[1]: Started user@500.service - User Manager for UID 500. May 10 00:06:27.487494 systemd[1]: Started session-1.scope - Session 1 of User core. May 10 00:06:27.544671 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:35468.service - OpenSSH per-connection server daemon (10.0.0.1:35468). 
May 10 00:06:27.611106 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 35468 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:06:27.612367 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:06:27.616213 systemd-logind[1429]: New session 2 of user core. May 10 00:06:27.630059 systemd[1]: Started session-2.scope - Session 2 of User core. May 10 00:06:27.681177 sshd[1566]: Connection closed by 10.0.0.1 port 35468 May 10 00:06:27.681520 sshd-session[1564]: pam_unix(sshd:session): session closed for user core May 10 00:06:27.689264 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:35468.service: Deactivated successfully. May 10 00:06:27.690883 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:06:27.692080 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. May 10 00:06:27.706158 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:35482.service - OpenSSH per-connection server daemon (10.0.0.1:35482). May 10 00:06:27.707018 systemd-logind[1429]: Removed session 2. May 10 00:06:27.743959 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 35482 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:06:27.745296 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:06:27.749031 systemd-logind[1429]: New session 3 of user core. May 10 00:06:27.760027 systemd[1]: Started session-3.scope - Session 3 of User core. May 10 00:06:27.809640 sshd[1573]: Connection closed by 10.0.0.1 port 35482 May 10 00:06:27.809455 sshd-session[1571]: pam_unix(sshd:session): session closed for user core May 10 00:06:27.823368 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:35482.service: Deactivated successfully. May 10 00:06:27.824991 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:06:27.827931 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit. May 10 00:06:27.841157 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:35490.service - OpenSSH per-connection server daemon (10.0.0.1:35490). May 10 00:06:27.842327 systemd-logind[1429]: Removed session 3. May 10 00:06:27.881228 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 35490 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:06:27.882589 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:06:27.886170 systemd-logind[1429]: New session 4 of user core. May 10 00:06:27.895039 systemd[1]: Started session-4.scope - Session 4 of User core. May 10 00:06:27.950782 sshd[1580]: Connection closed by 10.0.0.1 port 35490 May 10 00:06:27.951329 sshd-session[1578]: pam_unix(sshd:session): session closed for user core May 10 00:06:27.962602 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:35490.service: Deactivated successfully. May 10 00:06:27.964400 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:06:27.965739 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit. May 10 00:06:27.978166 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:35502.service - OpenSSH per-connection server daemon (10.0.0.1:35502). May 10 00:06:27.979317 systemd-logind[1429]: Removed session 4. 
May 10 00:06:28.016855 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 35502 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:06:28.018252 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:06:28.021898 systemd-logind[1429]: New session 5 of user core. May 10 00:06:28.029042 systemd[1]: Started session-5.scope - Session 5 of User core. May 10 00:06:28.091563 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 10 00:06:28.091880 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:06:28.112736 sudo[1588]: pam_unix(sudo:session): session closed for user root May 10 00:06:28.114573 sshd[1587]: Connection closed by 10.0.0.1 port 35502 May 10 00:06:28.115374 sshd-session[1585]: pam_unix(sshd:session): session closed for user core May 10 00:06:28.124440 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:35502.service: Deactivated successfully. May 10 00:06:28.126003 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:06:28.127896 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. May 10 00:06:28.129128 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:35518.service - OpenSSH per-connection server daemon (10.0.0.1:35518). May 10 00:06:28.129915 systemd-logind[1429]: Removed session 5. May 10 00:06:28.171661 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 35518 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:06:28.173093 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:06:28.176918 systemd-logind[1429]: New session 6 of user core. May 10 00:06:28.190044 systemd[1]: Started session-6.scope - Session 6 of User core. May 10 00:06:28.240338 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 10 00:06:28.240616 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:06:28.244369 sudo[1597]: pam_unix(sudo:session): session closed for user root May 10 00:06:28.250412 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 10 00:06:28.250695 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:06:28.273424 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 10 00:06:28.299011 augenrules[1619]: No rules May 10 00:06:28.300300 systemd[1]: audit-rules.service: Deactivated successfully. May 10 00:06:28.301895 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 10 00:06:28.303031 sudo[1596]: pam_unix(sudo:session): session closed for user root May 10 00:06:28.304874 sshd[1595]: Connection closed by 10.0.0.1 port 35518 May 10 00:06:28.305357 sshd-session[1593]: pam_unix(sshd:session): session closed for user core May 10 00:06:28.316680 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:35518.service: Deactivated successfully. May 10 00:06:28.318374 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:06:28.319729 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. May 10 00:06:28.333194 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:35522.service - OpenSSH per-connection server daemon (10.0.0.1:35522). May 10 00:06:28.334089 systemd-logind[1429]: Removed session 6. 
May 10 00:06:28.372519 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 35522 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:06:28.373500 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:06:28.377760 systemd-logind[1429]: New session 7 of user core. May 10 00:06:28.390031 systemd[1]: Started session-7.scope - Session 7 of User core. May 10 00:06:28.440119 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:06:28.440403 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 10 00:06:28.754115 systemd[1]: Starting docker.service - Docker Application Container Engine... May 10 00:06:28.754211 (dockerd)[1651]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 10 00:06:29.032095 dockerd[1651]: time="2025-05-10T00:06:29.031913728Z" level=info msg="Starting up" May 10 00:06:29.191554 dockerd[1651]: time="2025-05-10T00:06:29.191515743Z" level=info msg="Loading containers: start." May 10 00:06:29.343874 kernel: Initializing XFRM netlink socket May 10 00:06:29.415931 systemd-networkd[1386]: docker0: Link UP May 10 00:06:29.459298 dockerd[1651]: time="2025-05-10T00:06:29.459240018Z" level=info msg="Loading containers: done." May 10 00:06:29.471071 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1233422971-merged.mount: Deactivated successfully. May 10 00:06:29.472692 dockerd[1651]: time="2025-05-10T00:06:29.472644810Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:06:29.472772 dockerd[1651]: time="2025-05-10T00:06:29.472759480Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 10 00:06:29.472926 dockerd[1651]: time="2025-05-10T00:06:29.472910040Z" level=info msg="Daemon has completed initialization" May 10 00:06:29.506433 dockerd[1651]: time="2025-05-10T00:06:29.506284694Z" level=info msg="API listen on /run/docker.sock" May 10 00:06:29.506540 systemd[1]: Started docker.service - Docker Application Container Engine. May 10 00:06:30.282166 containerd[1455]: time="2025-05-10T00:06:30.282122532Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 10 00:06:30.865547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410849896.mount: Deactivated successfully. 
May 10 00:06:32.288547 containerd[1455]: time="2025-05-10T00:06:32.288497192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:32.289928 containerd[1455]: time="2025-05-10T00:06:32.289873066Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 10 00:06:32.293475 containerd[1455]: time="2025-05-10T00:06:32.293418084Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:32.296586 containerd[1455]: time="2025-05-10T00:06:32.296533824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:32.298377 containerd[1455]: time="2025-05-10T00:06:32.297966286Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.015798677s" May 10 00:06:32.298377 containerd[1455]: time="2025-05-10T00:06:32.298013184Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 10 00:06:32.318192 containerd[1455]: time="2025-05-10T00:06:32.318142587Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 10 00:06:33.718856 containerd[1455]: time="2025-05-10T00:06:33.718789036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:33.719376 containerd[1455]: time="2025-05-10T00:06:33.719333773Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 10 00:06:33.720171 containerd[1455]: time="2025-05-10T00:06:33.720137185Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:33.723466 containerd[1455]: time="2025-05-10T00:06:33.723423808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:33.724812 containerd[1455]: time="2025-05-10T00:06:33.724711382Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.406517237s" May 10 00:06:33.724812 containerd[1455]: time="2025-05-10T00:06:33.724752719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 10 
00:06:33.745025 containerd[1455]: time="2025-05-10T00:06:33.744957459Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 10 00:06:33.887917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:06:33.900062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:34.011587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:34.016379 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:06:34.063418 kubelet[1932]: E0510 00:06:34.063354 1932 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:06:34.066464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:06:34.066619 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:06:34.843454 containerd[1455]: time="2025-05-10T00:06:34.843400928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:34.843941 containerd[1455]: time="2025-05-10T00:06:34.843897170Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 10 00:06:34.844827 containerd[1455]: time="2025-05-10T00:06:34.844801901Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:34.847693 containerd[1455]: time="2025-05-10T00:06:34.847657030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:34.848965 containerd[1455]: time="2025-05-10T00:06:34.848917622Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.103917944s" May 10 00:06:34.848965 containerd[1455]: time="2025-05-10T00:06:34.848949207Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 10 00:06:34.868330 containerd[1455]: time="2025-05-10T00:06:34.868294113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 00:06:35.857830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677004302.mount: Deactivated successfully. 
May 10 00:06:36.058631 containerd[1455]: time="2025-05-10T00:06:36.058579529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:36.060562 containerd[1455]: time="2025-05-10T00:06:36.060482122Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 10 00:06:36.061190 containerd[1455]: time="2025-05-10T00:06:36.061159881Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:36.065389 containerd[1455]: time="2025-05-10T00:06:36.064206811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:36.065389 containerd[1455]: time="2025-05-10T00:06:36.064988887Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.196652319s" May 10 00:06:36.065389 containerd[1455]: time="2025-05-10T00:06:36.065018600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 10 00:06:36.088062 containerd[1455]: time="2025-05-10T00:06:36.088023102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:06:36.665613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136103251.mount: Deactivated successfully. 
May 10 00:06:37.389530 containerd[1455]: time="2025-05-10T00:06:37.389452029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:37.390217 containerd[1455]: time="2025-05-10T00:06:37.390166533Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 10 00:06:37.391874 containerd[1455]: time="2025-05-10T00:06:37.391813092Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:37.394606 containerd[1455]: time="2025-05-10T00:06:37.394568729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:37.395916 containerd[1455]: time="2025-05-10T00:06:37.395876468Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.307645007s" May 10 00:06:37.395916 containerd[1455]: time="2025-05-10T00:06:37.395910263Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 10 00:06:37.425592 containerd[1455]: time="2025-05-10T00:06:37.425545066Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 10 00:06:37.957004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682812629.mount: Deactivated successfully. 
May 10 00:06:37.962360 containerd[1455]: time="2025-05-10T00:06:37.962303230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:37.963141 containerd[1455]: time="2025-05-10T00:06:37.963057187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 10 00:06:37.963881 containerd[1455]: time="2025-05-10T00:06:37.963805725Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:37.966347 containerd[1455]: time="2025-05-10T00:06:37.966309697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:37.967492 containerd[1455]: time="2025-05-10T00:06:37.967452728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 541.865419ms" May 10 00:06:37.967539 containerd[1455]: time="2025-05-10T00:06:37.967489910Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 10 00:06:37.986681 containerd[1455]: time="2025-05-10T00:06:37.986590471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 00:06:38.500198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395958686.mount: Deactivated successfully. May 10 00:06:40.594258 containerd[1455]: time="2025-05-10T00:06:40.594193467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:40.595092 containerd[1455]: time="2025-05-10T00:06:40.595055602Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 10 00:06:40.595940 containerd[1455]: time="2025-05-10T00:06:40.595906965Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:40.599066 containerd[1455]: time="2025-05-10T00:06:40.599025529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:06:40.601490 containerd[1455]: time="2025-05-10T00:06:40.601451695Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.614825905s" May 10 00:06:40.601550 containerd[1455]: time="2025-05-10T00:06:40.601490239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 10 00:06:44.188156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
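The kubelet failures at 00:06:23 and 00:06:34, and the restart scheduled here, all stem from the unit starting before /var/lib/kubelet/config.yaml exists; on a kubeadm-provisioned node that file is written by kubeadm init / kubeadm join, so exiting with status 1 until then is expected rather than a packaging fault. Purely for orientation, a minimal KubeletConfiguration of the kind that ends up at that path might look like the sketch below. The cgroup driver, static pod path, and client CA path are consistent with what the later kubelet start in this log reports; the cluster DNS and domain values are ordinary kubeadm defaults assumed for illustration, and the real generated file is considerably fuller.

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch only; on this host the
# real file is generated by kubeadm rather than written by hand.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches CgroupDriver "systemd" reported below
staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # matches the client-ca-bundle controller below
authorization:
  mode: Webhook
clusterDomain: cluster.local               # assumption: standard kubeadm default
clusterDNS:
  - 10.96.0.10                             # assumption: standard kubeadm default
```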
May 10 00:06:44.199030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:44.305415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:44.310775 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:06:44.352920 kubelet[2158]: E0510 00:06:44.352866 2158 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:06:44.355687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:06:44.355989 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:06:46.934714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:46.943088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:46.959228 systemd[1]: Reloading requested from client PID 2173 ('systemctl') (unit session-7.scope)... May 10 00:06:46.959243 systemd[1]: Reloading... May 10 00:06:47.014872 zram_generator::config[2210]: No configuration found. May 10 00:06:47.136726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:47.189651 systemd[1]: Reloading finished in 230 ms. May 10 00:06:47.235801 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:47.238459 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:06:47.238670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:47.240304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:47.337292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:47.342057 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:06:47.384228 kubelet[2259]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:47.384228 kubelet[2259]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:06:47.384228 kubelet[2259]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 00:06:47.385244 kubelet[2259]: I0510 00:06:47.385195 2259 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:06:48.563598 kubelet[2259]: I0510 00:06:48.563563 2259 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:06:48.563598 kubelet[2259]: I0510 00:06:48.563591 2259 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:06:48.563959 kubelet[2259]: I0510 00:06:48.563781 2259 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:06:48.603422 kubelet[2259]: I0510 00:06:48.603381 2259 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:48.603492 kubelet[2259]: E0510 00:06:48.603423 2259 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.613748 kubelet[2259]: I0510 00:06:48.613718 2259 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:06:48.614915 kubelet[2259]: I0510 00:06:48.614865 2259 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:06:48.615076 kubelet[2259]: I0510 00:06:48.614910 2259 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:06:48.615164 kubelet[2259]: I0510 00:06:48.615137 2259 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:06:48.615164 kubelet[2259]: I0510 00:06:48.615146 2259 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:06:48.615411 kubelet[2259]: I0510 00:06:48.615386 2259 state_mem.go:36] "Initialized new in-memory state store" May 10 
00:06:48.616269 kubelet[2259]: I0510 00:06:48.616248 2259 kubelet.go:400] "Attempting to sync node with API server" May 10 00:06:48.616304 kubelet[2259]: I0510 00:06:48.616272 2259 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:06:48.616871 kubelet[2259]: I0510 00:06:48.616588 2259 kubelet.go:312] "Adding apiserver pod source" May 10 00:06:48.616871 kubelet[2259]: I0510 00:06:48.616764 2259 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:06:48.617358 kubelet[2259]: W0510 00:06:48.617036 2259 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.617358 kubelet[2259]: E0510 00:06:48.617085 2259 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.617358 kubelet[2259]: W0510 00:06:48.617196 2259 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.617358 kubelet[2259]: E0510 00:06:48.617238 2259 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.617835 kubelet[2259]: I0510 00:06:48.617804 2259 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 10 00:06:48.618193 kubelet[2259]: I0510 00:06:48.618171 2259 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:06:48.618285 kubelet[2259]: W0510 00:06:48.618273 2259 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 10 00:06:48.619119 kubelet[2259]: I0510 00:06:48.619014 2259 server.go:1264] "Started kubelet" May 10 00:06:48.619501 kubelet[2259]: I0510 00:06:48.619453 2259 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:06:48.620574 kubelet[2259]: I0510 00:06:48.620419 2259 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:06:48.620574 kubelet[2259]: I0510 00:06:48.620078 2259 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:06:48.621316 kubelet[2259]: I0510 00:06:48.621286 2259 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:06:48.622710 kubelet[2259]: I0510 00:06:48.622617 2259 server.go:455] "Adding debug handlers to kubelet server" May 10 00:06:48.624667 kubelet[2259]: I0510 00:06:48.623496 2259 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:06:48.624667 kubelet[2259]: I0510 00:06:48.623604 2259 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:06:48.624667 kubelet[2259]: I0510 00:06:48.624509 2259 reconciler.go:26] "Reconciler: start to sync state" May 10 00:06:48.624880 kubelet[2259]: W0510 00:06:48.624766 2259 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.625450 kubelet[2259]: E0510 00:06:48.625386 2259 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.625713 kubelet[2259]: E0510 00:06:48.625459 2259 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e01bb1cbd066c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-10 00:06:48.61899326 +0000 UTC m=+1.272748978,LastTimestamp:2025-05-10 00:06:48.61899326 +0000 UTC m=+1.272748978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 10 00:06:48.626528 kubelet[2259]: E0510 00:06:48.626421 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms" May 10 00:06:48.627483 kubelet[2259]: I0510 00:06:48.627133 2259 factory.go:221] Registration of the systemd container factory successfully May 10 00:06:48.627483 kubelet[2259]: I0510 00:06:48.627227 2259 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:06:48.628511 kubelet[2259]: I0510 00:06:48.628491 2259 factory.go:221] Registration of the containerd 
container factory successfully May 10 00:06:48.634278 kubelet[2259]: E0510 00:06:48.634241 2259 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:06:48.638032 kubelet[2259]: I0510 00:06:48.637989 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:06:48.638963 kubelet[2259]: I0510 00:06:48.638932 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:06:48.639113 kubelet[2259]: I0510 00:06:48.639089 2259 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:06:48.639160 kubelet[2259]: I0510 00:06:48.639117 2259 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:06:48.639185 kubelet[2259]: E0510 00:06:48.639169 2259 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:06:48.640993 kubelet[2259]: W0510 00:06:48.640948 2259 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.641076 kubelet[2259]: E0510 00:06:48.641000 2259 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:48.643364 kubelet[2259]: I0510 00:06:48.643335 2259 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:06:48.643364 kubelet[2259]: I0510 00:06:48.643351 2259 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:06:48.643364 kubelet[2259]: I0510 00:06:48.643369 2259 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:48.702833 kubelet[2259]: I0510 00:06:48.702778 2259 policy_none.go:49] "None policy: Start" May 10 00:06:48.703603 kubelet[2259]: I0510 00:06:48.703585 2259 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:06:48.703652 kubelet[2259]: I0510 00:06:48.703613 2259 state_mem.go:35] "Initializing new in-memory state store" May 10 00:06:48.709020 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 10 00:06:48.721590 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 10 00:06:48.722655 kubelet[2259]: I0510 00:06:48.722591 2259 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:06:48.722951 kubelet[2259]: E0510 00:06:48.722928 2259 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" May 10 00:06:48.724870 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 10 00:06:48.734766 kubelet[2259]: I0510 00:06:48.734738 2259 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:06:48.735179 kubelet[2259]: I0510 00:06:48.734987 2259 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:06:48.735179 kubelet[2259]: I0510 00:06:48.735103 2259 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:06:48.737154 kubelet[2259]: E0510 00:06:48.737130 2259 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 10 00:06:48.739385 kubelet[2259]: I0510 00:06:48.739298 2259 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 10 00:06:48.740305 kubelet[2259]: I0510 00:06:48.740267 2259 topology_manager.go:215] "Topology Admit Handler" podUID="e981c0feafc799c7d64369978b14f494" podNamespace="kube-system" podName="kube-apiserver-localhost" May 10 00:06:48.742284 kubelet[2259]: I0510 00:06:48.742252 2259 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 10 00:06:48.747982 systemd[1]: Created slice kubepods-burstable-pode981c0feafc799c7d64369978b14f494.slice - libcontainer container kubepods-burstable-pode981c0feafc799c7d64369978b14f494.slice. May 10 00:06:48.783314 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. May 10 00:06:48.797298 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. 
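The three "Topology Admit Handler" entries above are the control-plane static pods the kubelet picked up from its static pod path (/etc/kubernetes/manifests, per the "Adding static pod path" line earlier) while the API server at 10.0.0.141:6443 is still refusing connections. Purely as a structural sketch, a manifest in that directory is an ordinary Pod object along the lines below; the kubelet appends the node name, which is how "kube-apiserver" becomes the pod "kube-apiserver-localhost". The real kubeadm-generated files carry a long command-line flag list and several hostPath volumes (the ca-certs, k8s-certs, usr-share-ca-certificates, and kubeconfig mounts reconciled below) that are elided here, and the paths shown are typical kubeadm defaults rather than values read from this log.

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml -- structural sketch only;
# flags and most volumes are omitted, paths are assumed kubeadm defaults.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver          # kubelet reports it as kube-apiserver-<nodeName>
  namespace: kube-system
spec:
  hostNetwork: true
  priorityClassName: system-node-critical
  containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.30.12   # image pulled earlier in this log
      command: ["kube-apiserver"]                      # kubeadm fills in the real flag list
      volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
  volumes:
    - name: k8s-certs
      hostPath:
        path: /etc/kubernetes/pki
        type: DirectoryOrCreate
```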
May 10 00:06:48.825794 kubelet[2259]: I0510 00:06:48.825129 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 10 00:06:48.825794 kubelet[2259]: I0510 00:06:48.825169 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e981c0feafc799c7d64369978b14f494-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e981c0feafc799c7d64369978b14f494\") " pod="kube-system/kube-apiserver-localhost" May 10 00:06:48.825794 kubelet[2259]: I0510 00:06:48.825189 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e981c0feafc799c7d64369978b14f494-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e981c0feafc799c7d64369978b14f494\") " pod="kube-system/kube-apiserver-localhost" May 10 00:06:48.825794 kubelet[2259]: I0510 00:06:48.825205 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e981c0feafc799c7d64369978b14f494-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e981c0feafc799c7d64369978b14f494\") " pod="kube-system/kube-apiserver-localhost" May 10 00:06:48.825794 kubelet[2259]: I0510 00:06:48.825225 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:48.826034 kubelet[2259]: I0510 00:06:48.825239 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:48.826034 kubelet[2259]: I0510 00:06:48.825254 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:48.826034 kubelet[2259]: I0510 00:06:48.825297 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:48.826034 kubelet[2259]: I0510 00:06:48.825340 2259 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" May 10 00:06:48.827596 kubelet[2259]: E0510 00:06:48.827553 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms" May 10 00:06:48.925092 kubelet[2259]: I0510 00:06:48.925036 2259 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:06:48.925340 kubelet[2259]: E0510 00:06:48.925319 2259 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" May 10 00:06:49.081119 kubelet[2259]: E0510 00:06:49.081002 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:49.081880 containerd[1455]: time="2025-05-10T00:06:49.081771597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e981c0feafc799c7d64369978b14f494,Namespace:kube-system,Attempt:0,}" May 10 00:06:49.096095 kubelet[2259]: E0510 00:06:49.096058 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:49.096579 containerd[1455]: time="2025-05-10T00:06:49.096546448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 10 00:06:49.099927 kubelet[2259]: E0510 00:06:49.099898 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:49.100336 containerd[1455]: time="2025-05-10T00:06:49.100297288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 10 00:06:49.229015 kubelet[2259]: E0510 00:06:49.228959 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms" May 10 00:06:49.327499 kubelet[2259]: I0510 00:06:49.327463 2259 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:06:49.327899 kubelet[2259]: E0510 00:06:49.327856 2259 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" May 10 00:06:49.455736 kubelet[2259]: W0510 00:06:49.455672 2259 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:49.455736 kubelet[2259]: E0510 00:06:49.455737 2259 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:49.574356 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2077629397.mount: Deactivated successfully. May 10 00:06:49.580301 containerd[1455]: time="2025-05-10T00:06:49.580241381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:49.582166 containerd[1455]: time="2025-05-10T00:06:49.582107028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:06:49.582802 containerd[1455]: time="2025-05-10T00:06:49.582774530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:49.583577 containerd[1455]: time="2025-05-10T00:06:49.583549392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:49.584332 containerd[1455]: time="2025-05-10T00:06:49.584302310Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:49.585253 containerd[1455]: time="2025-05-10T00:06:49.585212031Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 10 00:06:49.586102 containerd[1455]: time="2025-05-10T00:06:49.586066513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 10 00:06:49.588017 containerd[1455]: time="2025-05-10T00:06:49.587978485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 10 00:06:49.590226 containerd[1455]: time="2025-05-10T00:06:49.590190914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.304123ms" May 10 00:06:49.591463 containerd[1455]: time="2025-05-10T00:06:49.591429629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.061834ms" May 10 00:06:49.595391 containerd[1455]: time="2025-05-10T00:06:49.595342269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.722555ms" May 10 00:06:49.746353 containerd[1455]: time="2025-05-10T00:06:49.746165642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:49.746353 containerd[1455]: time="2025-05-10T00:06:49.746233352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:49.746353 containerd[1455]: time="2025-05-10T00:06:49.746255615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:49.746353 containerd[1455]: time="2025-05-10T00:06:49.746289030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:49.746501 containerd[1455]: time="2025-05-10T00:06:49.746362535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:49.746501 containerd[1455]: time="2025-05-10T00:06:49.746383120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:49.746501 containerd[1455]: time="2025-05-10T00:06:49.746462181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:49.748033 containerd[1455]: time="2025-05-10T00:06:49.747696340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:49.748282 containerd[1455]: time="2025-05-10T00:06:49.747953028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:06:49.748369 containerd[1455]: time="2025-05-10T00:06:49.748305485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:06:49.748369 containerd[1455]: time="2025-05-10T00:06:49.748337581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:49.748441 containerd[1455]: time="2025-05-10T00:06:49.748410447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:06:49.767005 systemd[1]: Started cri-containerd-171d3e57d22752581eab2bdde590f019c0205de4454749eaf82ccded7d94024b.scope - libcontainer container 171d3e57d22752581eab2bdde590f019c0205de4454749eaf82ccded7d94024b. May 10 00:06:49.768107 systemd[1]: Started cri-containerd-b17767b44466bc81566b2cfec6b2afd7a43f443b1642862fa38f39b40bf7127b.scope - libcontainer container b17767b44466bc81566b2cfec6b2afd7a43f443b1642862fa38f39b40bf7127b. May 10 00:06:49.770879 systemd[1]: Started cri-containerd-9fd42fbab418df11e5a53e1dfcd9283ea415dab6e60de78a245394475e5db4f9.scope - libcontainer container 9fd42fbab418df11e5a53e1dfcd9283ea415dab6e60de78a245394475e5db4f9. 
May 10 00:06:49.801755 containerd[1455]: time="2025-05-10T00:06:49.801691634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"171d3e57d22752581eab2bdde590f019c0205de4454749eaf82ccded7d94024b\"" May 10 00:06:49.804386 kubelet[2259]: E0510 00:06:49.804292 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:49.804781 containerd[1455]: time="2025-05-10T00:06:49.804508211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e981c0feafc799c7d64369978b14f494,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fd42fbab418df11e5a53e1dfcd9283ea415dab6e60de78a245394475e5db4f9\"" May 10 00:06:49.805672 kubelet[2259]: E0510 00:06:49.805414 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:49.808332 containerd[1455]: time="2025-05-10T00:06:49.808281954Z" level=info msg="CreateContainer within sandbox \"9fd42fbab418df11e5a53e1dfcd9283ea415dab6e60de78a245394475e5db4f9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:06:49.808450 containerd[1455]: time="2025-05-10T00:06:49.808360375Z" level=info msg="CreateContainer within sandbox \"171d3e57d22752581eab2bdde590f019c0205de4454749eaf82ccded7d94024b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:06:49.814462 containerd[1455]: time="2025-05-10T00:06:49.814397189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b17767b44466bc81566b2cfec6b2afd7a43f443b1642862fa38f39b40bf7127b\"" May 10 00:06:49.815261 kubelet[2259]: E0510 00:06:49.815233 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:49.818698 containerd[1455]: time="2025-05-10T00:06:49.818572352Z" level=info msg="CreateContainer within sandbox \"b17767b44466bc81566b2cfec6b2afd7a43f443b1642862fa38f39b40bf7127b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:06:49.828705 containerd[1455]: time="2025-05-10T00:06:49.828656145Z" level=info msg="CreateContainer within sandbox \"9fd42fbab418df11e5a53e1dfcd9283ea415dab6e60de78a245394475e5db4f9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76f6b80a258dc4fa02833c6af9e6a09625c58b7805c55497c42261699b8925e9\"" May 10 00:06:49.829349 containerd[1455]: time="2025-05-10T00:06:49.829320729Z" level=info msg="StartContainer for \"76f6b80a258dc4fa02833c6af9e6a09625c58b7805c55497c42261699b8925e9\"" May 10 00:06:49.832917 containerd[1455]: time="2025-05-10T00:06:49.832866882Z" level=info msg="CreateContainer within sandbox \"171d3e57d22752581eab2bdde590f019c0205de4454749eaf82ccded7d94024b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"919e1dabc0828387351bf1e1b411a6235f4d769a87820d5ffce1eb3305627c02\"" May 10 00:06:49.833554 containerd[1455]: time="2025-05-10T00:06:49.833520114Z" level=info msg="StartContainer for \"919e1dabc0828387351bf1e1b411a6235f4d769a87820d5ffce1eb3305627c02\"" May 10 00:06:49.843210 
containerd[1455]: time="2025-05-10T00:06:49.843147727Z" level=info msg="CreateContainer within sandbox \"b17767b44466bc81566b2cfec6b2afd7a43f443b1642862fa38f39b40bf7127b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"36dc06f76ecce439e60559a21f760f521864ebd0581641ba709a422f62d24424\"" May 10 00:06:49.843763 containerd[1455]: time="2025-05-10T00:06:49.843709668Z" level=info msg="StartContainer for \"36dc06f76ecce439e60559a21f760f521864ebd0581641ba709a422f62d24424\"" May 10 00:06:49.845101 kubelet[2259]: W0510 00:06:49.845042 2259 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:49.845561 kubelet[2259]: E0510 00:06:49.845526 2259 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:49.860021 systemd[1]: Started cri-containerd-76f6b80a258dc4fa02833c6af9e6a09625c58b7805c55497c42261699b8925e9.scope - libcontainer container 76f6b80a258dc4fa02833c6af9e6a09625c58b7805c55497c42261699b8925e9. May 10 00:06:49.861019 systemd[1]: Started cri-containerd-919e1dabc0828387351bf1e1b411a6235f4d769a87820d5ffce1eb3305627c02.scope - libcontainer container 919e1dabc0828387351bf1e1b411a6235f4d769a87820d5ffce1eb3305627c02. May 10 00:06:49.877031 systemd[1]: Started cri-containerd-36dc06f76ecce439e60559a21f760f521864ebd0581641ba709a422f62d24424.scope - libcontainer container 36dc06f76ecce439e60559a21f760f521864ebd0581641ba709a422f62d24424. 
May 10 00:06:49.920804 containerd[1455]: time="2025-05-10T00:06:49.920533761Z" level=info msg="StartContainer for \"76f6b80a258dc4fa02833c6af9e6a09625c58b7805c55497c42261699b8925e9\" returns successfully" May 10 00:06:49.920804 containerd[1455]: time="2025-05-10T00:06:49.920551627Z" level=info msg="StartContainer for \"919e1dabc0828387351bf1e1b411a6235f4d769a87820d5ffce1eb3305627c02\" returns successfully" May 10 00:06:49.951839 containerd[1455]: time="2025-05-10T00:06:49.951765966Z" level=info msg="StartContainer for \"36dc06f76ecce439e60559a21f760f521864ebd0581641ba709a422f62d24424\" returns successfully" May 10 00:06:50.032761 kubelet[2259]: E0510 00:06:50.032620 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="1.6s" May 10 00:06:50.063385 kubelet[2259]: W0510 00:06:50.063314 2259 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:50.063385 kubelet[2259]: E0510 00:06:50.063361 2259 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:50.132821 kubelet[2259]: I0510 00:06:50.132679 2259 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:06:50.136864 kubelet[2259]: E0510 00:06:50.136078 2259 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" May 10 00:06:50.144388 kubelet[2259]: W0510 00:06:50.144307 2259 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:50.144590 kubelet[2259]: E0510 00:06:50.144565 2259 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused May 10 00:06:50.649380 kubelet[2259]: E0510 00:06:50.649281 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:50.652047 kubelet[2259]: E0510 00:06:50.651998 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:50.653667 kubelet[2259]: E0510 00:06:50.653638 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:51.636539 kubelet[2259]: E0510 00:06:51.636503 2259 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" 
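The containerd entries above trace the CRI call sequence the kubelet drives for each static pod: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox, and StartContainer reports success. Below is a minimal sketch of the same sequence issued directly against containerd's CRI socket; the socket path and all pod/container metadata are illustrative assumptions, not values taken from this host. In practice crictl (runp / create / start) wraps these same calls.

// Sketch only: RunPodSandbox -> CreateContainer -> StartContainer over the
// CRI endpoint, mirroring the sequence logged above. Socket path and all
// metadata values are assumptions for illustration.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Assumed CRI endpoint; containerd commonly exposes it here.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox metadata mirrors the PodSandboxMetadata fields seen in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "example-pod", Uid: "0000", Namespace: "kube-system", Attempt: 0,
		},
	}

	// 1. RunPodSandbox returns the sandbox id logged as "returns sandbox id".
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox, then 3. StartContainer.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "example", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.8"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, cc.ContainerId)
}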
May 10 00:06:51.655431 kubelet[2259]: E0510 00:06:51.655403 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:51.737467 kubelet[2259]: I0510 00:06:51.737438 2259 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:06:51.746108 kubelet[2259]: I0510 00:06:51.746068 2259 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 10 00:06:51.754092 kubelet[2259]: E0510 00:06:51.754066 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:51.854668 kubelet[2259]: E0510 00:06:51.854613 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:51.955399 kubelet[2259]: E0510 00:06:51.955358 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:51.982240 kubelet[2259]: E0510 00:06:51.982203 2259 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:52.056038 kubelet[2259]: E0510 00:06:52.055980 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.156951 kubelet[2259]: E0510 00:06:52.156903 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.257651 kubelet[2259]: E0510 00:06:52.257527 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.358351 kubelet[2259]: E0510 00:06:52.358313 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.459362 kubelet[2259]: E0510 00:06:52.459316 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.560267 kubelet[2259]: E0510 00:06:52.559851 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.660306 kubelet[2259]: E0510 00:06:52.660273 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.760934 kubelet[2259]: E0510 00:06:52.760877 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.861827 kubelet[2259]: E0510 00:06:52.861561 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:52.962464 kubelet[2259]: E0510 00:06:52.962417 2259 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 10 00:06:53.300072 systemd[1]: Reloading requested from client PID 2538 ('systemctl') (unit session-7.scope)... May 10 00:06:53.300089 systemd[1]: Reloading... May 10 00:06:53.364932 zram_generator::config[2580]: No configuration found. May 10 00:06:53.448750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:06:53.512355 systemd[1]: Reloading finished in 211 ms. 
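The repeated "connection refused" errors against https://10.0.0.141:6443 above come from the kubelet retrying lease creation and node registration with growing intervals (400ms, 800ms, 1.6s) until the static kube-apiserver pod it just started becomes reachable, after which the node registers and the lease appears. A minimal client-go sketch for confirming both objects from a kubeconfig is below; the kubeconfig path and node name are assumptions for illustration.

// Sketch: verify the Node object and its kube-node-lease Lease exist, which is
// what the kubelet above was retrying until the API server became reachable.
// Kubeconfig path and node name are assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("node %s registered; lease last renewed at %v\n", node.Name, lease.Spec.RenewTime)
}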
May 10 00:06:53.541793 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:53.542010 kubelet[2259]: I0510 00:06:53.541980 2259 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:53.560038 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:06:53.560302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:53.560379 systemd[1]: kubelet.service: Consumed 1.605s CPU time, 114.3M memory peak, 0B memory swap peak. May 10 00:06:53.574202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:06:53.664807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:06:53.668651 (kubelet)[2619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 10 00:06:53.712652 kubelet[2619]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:53.712652 kubelet[2619]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:06:53.712652 kubelet[2619]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:06:53.713014 kubelet[2619]: I0510 00:06:53.712684 2619 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:06:53.716489 kubelet[2619]: I0510 00:06:53.716451 2619 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:06:53.716489 kubelet[2619]: I0510 00:06:53.716475 2619 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:06:53.716667 kubelet[2619]: I0510 00:06:53.716643 2619 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:06:53.717947 kubelet[2619]: I0510 00:06:53.717927 2619 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:06:53.718991 kubelet[2619]: I0510 00:06:53.718974 2619 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:06:53.723922 kubelet[2619]: I0510 00:06:53.723891 2619 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:06:53.724114 kubelet[2619]: I0510 00:06:53.724077 2619 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:06:53.724238 kubelet[2619]: I0510 00:06:53.724101 2619 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:06:53.724238 kubelet[2619]: I0510 00:06:53.724238 2619 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:06:53.724339 kubelet[2619]: I0510 00:06:53.724247 2619 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:06:53.724339 kubelet[2619]: I0510 00:06:53.724278 2619 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:53.724379 kubelet[2619]: I0510 00:06:53.724365 2619 kubelet.go:400] "Attempting to sync node with API server" May 10 00:06:53.724379 kubelet[2619]: I0510 00:06:53.724376 2619 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:06:53.724421 kubelet[2619]: I0510 00:06:53.724400 2619 kubelet.go:312] "Adding apiserver pod source" May 10 00:06:53.724421 kubelet[2619]: I0510 00:06:53.724411 2619 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:06:53.729273 kubelet[2619]: I0510 00:06:53.729225 2619 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 10 00:06:53.729424 kubelet[2619]: I0510 00:06:53.729407 2619 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:06:53.729913 kubelet[2619]: I0510 00:06:53.729778 2619 server.go:1264] "Started kubelet" May 10 00:06:53.729913 kubelet[2619]: I0510 00:06:53.729861 2619 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:06:53.730248 kubelet[2619]: I0510 00:06:53.730207 2619 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:06:53.730507 kubelet[2619]: I0510 00:06:53.730488 2619 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:06:53.731310 kubelet[2619]: I0510 00:06:53.731277 2619 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:06:53.732866 kubelet[2619]: I0510 00:06:53.731761 2619 server.go:455] "Adding debug handlers to kubelet server" May 10 00:06:53.732866 kubelet[2619]: I0510 00:06:53.732725 2619 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:06:53.732866 kubelet[2619]: I0510 00:06:53.732792 2619 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:06:53.733081 kubelet[2619]: I0510 00:06:53.733067 2619 reconciler.go:26] "Reconciler: start to sync state" May 10 00:06:53.739299 kubelet[2619]: I0510 00:06:53.739266 2619 factory.go:221] Registration of the systemd container factory successfully May 10 00:06:53.739859 kubelet[2619]: I0510 00:06:53.739361 2619 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:06:53.747687 kubelet[2619]: E0510 00:06:53.747650 2619 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:06:53.748737 kubelet[2619]: I0510 00:06:53.748703 2619 factory.go:221] Registration of the containerd container factory successfully May 10 00:06:53.752281 kubelet[2619]: I0510 00:06:53.752227 2619 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:06:53.753256 kubelet[2619]: I0510 00:06:53.753178 2619 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 10 00:06:53.753256 kubelet[2619]: I0510 00:06:53.753215 2619 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:06:53.753256 kubelet[2619]: I0510 00:06:53.753230 2619 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:06:53.753370 kubelet[2619]: E0510 00:06:53.753267 2619 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:06:53.779247 kubelet[2619]: I0510 00:06:53.779223 2619 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:06:53.779247 kubelet[2619]: I0510 00:06:53.779241 2619 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:06:53.779361 kubelet[2619]: I0510 00:06:53.779260 2619 state_mem.go:36] "Initialized new in-memory state store" May 10 00:06:53.779405 kubelet[2619]: I0510 00:06:53.779388 2619 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:06:53.779428 kubelet[2619]: I0510 00:06:53.779403 2619 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:06:53.779428 kubelet[2619]: I0510 00:06:53.779419 2619 policy_none.go:49] "None policy: Start" May 10 00:06:53.779963 kubelet[2619]: I0510 00:06:53.779946 2619 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:06:53.780025 kubelet[2619]: I0510 00:06:53.779971 2619 state_mem.go:35] "Initializing new in-memory state store" May 10 00:06:53.780140 kubelet[2619]: I0510 00:06:53.780125 2619 state_mem.go:75] "Updated machine memory state" May 10 00:06:53.783617 kubelet[2619]: I0510 00:06:53.783598 2619 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:06:53.783777 
kubelet[2619]: I0510 00:06:53.783741 2619 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:06:53.783848 kubelet[2619]: I0510 00:06:53.783830 2619 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:06:53.836191 kubelet[2619]: I0510 00:06:53.836054 2619 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:06:53.843045 kubelet[2619]: I0510 00:06:53.843018 2619 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 10 00:06:53.843123 kubelet[2619]: I0510 00:06:53.843088 2619 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 10 00:06:53.853773 kubelet[2619]: I0510 00:06:53.853715 2619 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 10 00:06:53.853920 kubelet[2619]: I0510 00:06:53.853809 2619 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 10 00:06:53.853920 kubelet[2619]: I0510 00:06:53.853861 2619 topology_manager.go:215] "Topology Admit Handler" podUID="e981c0feafc799c7d64369978b14f494" podNamespace="kube-system" podName="kube-apiserver-localhost" May 10 00:06:53.935197 kubelet[2619]: I0510 00:06:53.935140 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:53.935197 kubelet[2619]: I0510 00:06:53.935184 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:53.935197 kubelet[2619]: I0510 00:06:53.935205 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 10 00:06:53.935197 kubelet[2619]: I0510 00:06:53.935223 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e981c0feafc799c7d64369978b14f494-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e981c0feafc799c7d64369978b14f494\") " pod="kube-system/kube-apiserver-localhost" May 10 00:06:53.935443 kubelet[2619]: I0510 00:06:53.935240 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e981c0feafc799c7d64369978b14f494-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e981c0feafc799c7d64369978b14f494\") " pod="kube-system/kube-apiserver-localhost" May 10 00:06:53.935443 kubelet[2619]: I0510 00:06:53.935257 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:53.935443 kubelet[2619]: I0510 00:06:53.935273 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:53.935443 kubelet[2619]: I0510 00:06:53.935288 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:06:53.935443 kubelet[2619]: I0510 00:06:53.935301 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e981c0feafc799c7d64369978b14f494-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e981c0feafc799c7d64369978b14f494\") " pod="kube-system/kube-apiserver-localhost" May 10 00:06:54.168386 kubelet[2619]: E0510 00:06:54.168271 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:54.168487 kubelet[2619]: E0510 00:06:54.168425 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:54.168826 kubelet[2619]: E0510 00:06:54.168802 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:54.725697 kubelet[2619]: I0510 00:06:54.725615 2619 apiserver.go:52] "Watching apiserver" May 10 00:06:54.733344 kubelet[2619]: I0510 00:06:54.733297 2619 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:06:54.767225 kubelet[2619]: E0510 00:06:54.767197 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:54.769044 kubelet[2619]: E0510 00:06:54.768996 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:54.780488 kubelet[2619]: E0510 00:06:54.780414 2619 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 10 00:06:54.781025 kubelet[2619]: E0510 00:06:54.780994 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:54.825224 kubelet[2619]: I0510 00:06:54.825112 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.825078584 podStartE2EDuration="1.825078584s" 
podCreationTimestamp="2025-05-10 00:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:54.824172629 +0000 UTC m=+1.152105454" watchObservedRunningTime="2025-05-10 00:06:54.825078584 +0000 UTC m=+1.153011409" May 10 00:06:54.843890 kubelet[2619]: I0510 00:06:54.843664 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.843646636 podStartE2EDuration="1.843646636s" podCreationTimestamp="2025-05-10 00:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:54.834652454 +0000 UTC m=+1.162585279" watchObservedRunningTime="2025-05-10 00:06:54.843646636 +0000 UTC m=+1.171579461" May 10 00:06:54.858710 kubelet[2619]: I0510 00:06:54.858515 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.858498781 podStartE2EDuration="1.858498781s" podCreationTimestamp="2025-05-10 00:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:06:54.844578593 +0000 UTC m=+1.172511418" watchObservedRunningTime="2025-05-10 00:06:54.858498781 +0000 UTC m=+1.186431566" May 10 00:06:55.769505 kubelet[2619]: E0510 00:06:55.769462 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:55.769825 kubelet[2619]: E0510 00:06:55.769707 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:55.769825 kubelet[2619]: E0510 00:06:55.769789 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:06:58.477344 sudo[1630]: pam_unix(sudo:session): session closed for user root May 10 00:06:58.478761 sshd[1629]: Connection closed by 10.0.0.1 port 35522 May 10 00:06:58.479266 sshd-session[1627]: pam_unix(sshd:session): session closed for user core May 10 00:06:58.482022 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:35522.service: Deactivated successfully. May 10 00:06:58.483667 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:06:58.483829 systemd[1]: session-7.scope: Consumed 8.564s CPU time, 190.5M memory peak, 0B memory swap peak. May 10 00:06:58.485531 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. May 10 00:06:58.486372 systemd-logind[1429]: Removed session 7. 
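The pod_startup_latency_tracker entries above report startup times in Go's time.Duration string form (e.g. podStartE2EDuration="1.825078584s"). A trivial sketch parsing and comparing two of the logged values:

// Sketch: parse the durations logged above and compare them.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	apiserver, err := time.ParseDuration("1.825078584s") // kube-apiserver-localhost
	if err != nil {
		log.Fatal(err)
	}
	scheduler, err := time.ParseDuration("1.858498781s") // kube-scheduler-localhost
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("scheduler took %v longer to reach running than the apiserver\n", scheduler-apiserver)
}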
May 10 00:07:04.458689 kubelet[2619]: E0510 00:07:04.458651 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:04.787789 kubelet[2619]: E0510 00:07:04.787673 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:05.292143 kubelet[2619]: E0510 00:07:05.291422 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:05.328559 kubelet[2619]: E0510 00:07:05.328448 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:06.104742 update_engine[1432]: I20250510 00:07:06.104468 1432 update_attempter.cc:509] Updating boot flags... May 10 00:07:06.157954 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2718) May 10 00:07:06.175872 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2716) May 10 00:07:06.201889 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2716) May 10 00:07:08.089561 kubelet[2619]: I0510 00:07:08.089528 2619 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:07:08.103059 containerd[1455]: time="2025-05-10T00:07:08.102998521Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 00:07:08.103434 kubelet[2619]: I0510 00:07:08.103372 2619 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:07:08.850672 kubelet[2619]: I0510 00:07:08.850632 2619 topology_manager.go:215] "Topology Admit Handler" podUID="265904d6-8734-4695-8634-5368d82ecfb7" podNamespace="kube-system" podName="kube-proxy-9lfpg" May 10 00:07:08.859880 systemd[1]: Created slice kubepods-besteffort-pod265904d6_8734_4695_8634_5368d82ecfb7.slice - libcontainer container kubepods-besteffort-pod265904d6_8734_4695_8634_5368d82ecfb7.slice. 
May 10 00:07:08.955077 kubelet[2619]: I0510 00:07:08.954962 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/265904d6-8734-4695-8634-5368d82ecfb7-kube-proxy\") pod \"kube-proxy-9lfpg\" (UID: \"265904d6-8734-4695-8634-5368d82ecfb7\") " pod="kube-system/kube-proxy-9lfpg" May 10 00:07:08.955077 kubelet[2619]: I0510 00:07:08.955005 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/265904d6-8734-4695-8634-5368d82ecfb7-xtables-lock\") pod \"kube-proxy-9lfpg\" (UID: \"265904d6-8734-4695-8634-5368d82ecfb7\") " pod="kube-system/kube-proxy-9lfpg" May 10 00:07:08.955077 kubelet[2619]: I0510 00:07:08.955028 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/265904d6-8734-4695-8634-5368d82ecfb7-lib-modules\") pod \"kube-proxy-9lfpg\" (UID: \"265904d6-8734-4695-8634-5368d82ecfb7\") " pod="kube-system/kube-proxy-9lfpg" May 10 00:07:08.955077 kubelet[2619]: I0510 00:07:08.955045 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v944w\" (UniqueName: \"kubernetes.io/projected/265904d6-8734-4695-8634-5368d82ecfb7-kube-api-access-v944w\") pod \"kube-proxy-9lfpg\" (UID: \"265904d6-8734-4695-8634-5368d82ecfb7\") " pod="kube-system/kube-proxy-9lfpg" May 10 00:07:09.067830 kubelet[2619]: E0510 00:07:09.067765 2619 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 10 00:07:09.067830 kubelet[2619]: E0510 00:07:09.067808 2619 projected.go:200] Error preparing data for projected volume kube-api-access-v944w for pod kube-system/kube-proxy-9lfpg: configmap "kube-root-ca.crt" not found May 10 00:07:09.068004 kubelet[2619]: E0510 00:07:09.067887 2619 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/265904d6-8734-4695-8634-5368d82ecfb7-kube-api-access-v944w podName:265904d6-8734-4695-8634-5368d82ecfb7 nodeName:}" failed. No retries permitted until 2025-05-10 00:07:09.567866317 +0000 UTC m=+15.895799102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v944w" (UniqueName: "kubernetes.io/projected/265904d6-8734-4695-8634-5368d82ecfb7-kube-api-access-v944w") pod "kube-proxy-9lfpg" (UID: "265904d6-8734-4695-8634-5368d82ecfb7") : configmap "kube-root-ca.crt" not found May 10 00:07:09.167971 kubelet[2619]: I0510 00:07:09.167546 2619 topology_manager.go:215] "Topology Admit Handler" podUID="69341d63-0f53-4656-a849-65a69f2a0e3f" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-cxvnv" May 10 00:07:09.176147 systemd[1]: Created slice kubepods-besteffort-pod69341d63_0f53_4656_a849_65a69f2a0e3f.slice - libcontainer container kubepods-besteffort-pod69341d63_0f53_4656_a849_65a69f2a0e3f.slice. 
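The MountVolume.SetUp failure above is the projected kube-api-access token volume for kube-proxy waiting on the kube-root-ca.crt ConfigMap, which kube-controller-manager's root-ca-cert-publisher controller publishes into each namespace shortly after it starts; the kubelet simply retries after 500ms. A minimal client-go sketch for checking whether that ConfigMap has appeared yet (kubeconfig path assumed):

// Sketch: look up the kube-root-ca.crt ConfigMap that the projected volume
// above was waiting on. Kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(),
		"kube-root-ca.crt", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("not published yet: %v", err)
	}
	fmt.Printf("kube-root-ca.crt present, %d bytes of CA data\n", len(cm.Data["ca.crt"]))
}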
May 10 00:07:09.256383 kubelet[2619]: I0510 00:07:09.256328 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/69341d63-0f53-4656-a849-65a69f2a0e3f-var-lib-calico\") pod \"tigera-operator-797db67f8-cxvnv\" (UID: \"69341d63-0f53-4656-a849-65a69f2a0e3f\") " pod="tigera-operator/tigera-operator-797db67f8-cxvnv" May 10 00:07:09.256383 kubelet[2619]: I0510 00:07:09.256380 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrrq\" (UniqueName: \"kubernetes.io/projected/69341d63-0f53-4656-a849-65a69f2a0e3f-kube-api-access-jbrrq\") pod \"tigera-operator-797db67f8-cxvnv\" (UID: \"69341d63-0f53-4656-a849-65a69f2a0e3f\") " pod="tigera-operator/tigera-operator-797db67f8-cxvnv" May 10 00:07:09.487107 containerd[1455]: time="2025-05-10T00:07:09.486566658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-cxvnv,Uid:69341d63-0f53-4656-a849-65a69f2a0e3f,Namespace:tigera-operator,Attempt:0,}" May 10 00:07:09.506156 containerd[1455]: time="2025-05-10T00:07:09.505995698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:09.506156 containerd[1455]: time="2025-05-10T00:07:09.506071460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:09.506156 containerd[1455]: time="2025-05-10T00:07:09.506087821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:09.506320 containerd[1455]: time="2025-05-10T00:07:09.506217506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:09.525034 systemd[1]: Started cri-containerd-f3a79e34e87763a1afd1894acf0294d1f3ad35648ffdf78a7cba8fdea837ff8b.scope - libcontainer container f3a79e34e87763a1afd1894acf0294d1f3ad35648ffdf78a7cba8fdea837ff8b. May 10 00:07:09.550246 containerd[1455]: time="2025-05-10T00:07:09.550201854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-cxvnv,Uid:69341d63-0f53-4656-a849-65a69f2a0e3f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f3a79e34e87763a1afd1894acf0294d1f3ad35648ffdf78a7cba8fdea837ff8b\"" May 10 00:07:09.555633 containerd[1455]: time="2025-05-10T00:07:09.555602054Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 10 00:07:09.766955 kubelet[2619]: E0510 00:07:09.766829 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:09.767902 containerd[1455]: time="2025-05-10T00:07:09.767267291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lfpg,Uid:265904d6-8734-4695-8634-5368d82ecfb7,Namespace:kube-system,Attempt:0,}" May 10 00:07:09.786905 containerd[1455]: time="2025-05-10T00:07:09.786800814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:09.786905 containerd[1455]: time="2025-05-10T00:07:09.786877457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:09.786905 containerd[1455]: time="2025-05-10T00:07:09.786889337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:09.787156 containerd[1455]: time="2025-05-10T00:07:09.786965700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:09.808070 systemd[1]: Started cri-containerd-c788f2c6206bc35f0cdcc9ca2c977bbd39f88808b5477cfbf9565bf31448daa6.scope - libcontainer container c788f2c6206bc35f0cdcc9ca2c977bbd39f88808b5477cfbf9565bf31448daa6. May 10 00:07:09.829025 containerd[1455]: time="2025-05-10T00:07:09.828933174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lfpg,Uid:265904d6-8734-4695-8634-5368d82ecfb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c788f2c6206bc35f0cdcc9ca2c977bbd39f88808b5477cfbf9565bf31448daa6\"" May 10 00:07:09.830491 kubelet[2619]: E0510 00:07:09.830262 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:09.833898 containerd[1455]: time="2025-05-10T00:07:09.833799554Z" level=info msg="CreateContainer within sandbox \"c788f2c6206bc35f0cdcc9ca2c977bbd39f88808b5477cfbf9565bf31448daa6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:07:09.847133 containerd[1455]: time="2025-05-10T00:07:09.847083886Z" level=info msg="CreateContainer within sandbox \"c788f2c6206bc35f0cdcc9ca2c977bbd39f88808b5477cfbf9565bf31448daa6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d35a6d50f343ed894794e4c4b0bd971ff0bf329feee704cf87b1ac3dc05be185\"" May 10 00:07:09.854055 containerd[1455]: time="2025-05-10T00:07:09.854005022Z" level=info msg="StartContainer for \"d35a6d50f343ed894794e4c4b0bd971ff0bf329feee704cf87b1ac3dc05be185\"" May 10 00:07:09.883051 systemd[1]: Started cri-containerd-d35a6d50f343ed894794e4c4b0bd971ff0bf329feee704cf87b1ac3dc05be185.scope - libcontainer container d35a6d50f343ed894794e4c4b0bd971ff0bf329feee704cf87b1ac3dc05be185. May 10 00:07:09.912118 containerd[1455]: time="2025-05-10T00:07:09.911991649Z" level=info msg="StartContainer for \"d35a6d50f343ed894794e4c4b0bd971ff0bf329feee704cf87b1ac3dc05be185\" returns successfully" May 10 00:07:10.803229 kubelet[2619]: E0510 00:07:10.803187 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:11.283425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170051907.mount: Deactivated successfully. 
May 10 00:07:11.948922 containerd[1455]: time="2025-05-10T00:07:11.948871563Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:11.949516 containerd[1455]: time="2025-05-10T00:07:11.949472823Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 10 00:07:11.950195 containerd[1455]: time="2025-05-10T00:07:11.950170126Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:11.953205 containerd[1455]: time="2025-05-10T00:07:11.953164947Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:11.954055 containerd[1455]: time="2025-05-10T00:07:11.954013616Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.39837656s" May 10 00:07:11.954055 containerd[1455]: time="2025-05-10T00:07:11.954052657Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 10 00:07:11.965503 containerd[1455]: time="2025-05-10T00:07:11.965469961Z" level=info msg="CreateContainer within sandbox \"f3a79e34e87763a1afd1894acf0294d1f3ad35648ffdf78a7cba8fdea837ff8b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 10 00:07:11.977584 containerd[1455]: time="2025-05-10T00:07:11.977536967Z" level=info msg="CreateContainer within sandbox \"f3a79e34e87763a1afd1894acf0294d1f3ad35648ffdf78a7cba8fdea837ff8b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1b39984b087a704211fb95adb247828f7c16562d0d605ad413eef08b70237064\"" May 10 00:07:11.978015 containerd[1455]: time="2025-05-10T00:07:11.977985742Z" level=info msg="StartContainer for \"1b39984b087a704211fb95adb247828f7c16562d0d605ad413eef08b70237064\"" May 10 00:07:12.010049 systemd[1]: Started cri-containerd-1b39984b087a704211fb95adb247828f7c16562d0d605ad413eef08b70237064.scope - libcontainer container 1b39984b087a704211fb95adb247828f7c16562d0d605ad413eef08b70237064. 
May 10 00:07:12.063713 containerd[1455]: time="2025-05-10T00:07:12.061936594Z" level=info msg="StartContainer for \"1b39984b087a704211fb95adb247828f7c16562d0d605ad413eef08b70237064\" returns successfully" May 10 00:07:12.825569 kubelet[2619]: I0510 00:07:12.825497 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9lfpg" podStartSLOduration=4.825481279 podStartE2EDuration="4.825481279s" podCreationTimestamp="2025-05-10 00:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:07:10.823803934 +0000 UTC m=+17.151736759" watchObservedRunningTime="2025-05-10 00:07:12.825481279 +0000 UTC m=+19.153414104" May 10 00:07:12.826352 kubelet[2619]: I0510 00:07:12.826228 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-cxvnv" podStartSLOduration=1.42396473 podStartE2EDuration="3.826219263s" podCreationTimestamp="2025-05-10 00:07:09 +0000 UTC" firstStartedPulling="2025-05-10 00:07:09.555083195 +0000 UTC m=+15.883016020" lastFinishedPulling="2025-05-10 00:07:11.957337728 +0000 UTC m=+18.285270553" observedRunningTime="2025-05-10 00:07:12.825347755 +0000 UTC m=+19.153280580" watchObservedRunningTime="2025-05-10 00:07:12.826219263 +0000 UTC m=+19.154152088" May 10 00:07:15.524662 kubelet[2619]: I0510 00:07:15.523660 2619 topology_manager.go:215] "Topology Admit Handler" podUID="d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0" podNamespace="calico-system" podName="calico-typha-5d56bdb946-bcbdr" May 10 00:07:15.533722 systemd[1]: Created slice kubepods-besteffort-podd743a4bb_0c0b_4c0e_8393_270ad6a7f9f0.slice - libcontainer container kubepods-besteffort-podd743a4bb_0c0b_4c0e_8393_270ad6a7f9f0.slice. May 10 00:07:15.593283 kubelet[2619]: I0510 00:07:15.591996 2619 topology_manager.go:215] "Topology Admit Handler" podUID="39fee0d3-46e2-4f23-b4a9-d4f2d60414e9" podNamespace="calico-system" podName="calico-node-cczsl" May 10 00:07:15.597795 kubelet[2619]: I0510 00:07:15.597760 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0-typha-certs\") pod \"calico-typha-5d56bdb946-bcbdr\" (UID: \"d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0\") " pod="calico-system/calico-typha-5d56bdb946-bcbdr" May 10 00:07:15.597900 kubelet[2619]: I0510 00:07:15.597801 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg92r\" (UniqueName: \"kubernetes.io/projected/d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0-kube-api-access-pg92r\") pod \"calico-typha-5d56bdb946-bcbdr\" (UID: \"d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0\") " pod="calico-system/calico-typha-5d56bdb946-bcbdr" May 10 00:07:15.597900 kubelet[2619]: I0510 00:07:15.597821 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0-tigera-ca-bundle\") pod \"calico-typha-5d56bdb946-bcbdr\" (UID: \"d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0\") " pod="calico-system/calico-typha-5d56bdb946-bcbdr" May 10 00:07:15.600452 systemd[1]: Created slice kubepods-besteffort-pod39fee0d3_46e2_4f23_b4a9_d4f2d60414e9.slice - libcontainer container kubepods-besteffort-pod39fee0d3_46e2_4f23_b4a9_d4f2d60414e9.slice. 
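The tigera-operator pull above reports an image of 19,319,079 bytes fetched in roughly 2.398s, i.e. about 8 MB/s from quay.io. A trivial sketch of that arithmetic using the logged values:

// Sketch: back-of-the-envelope pull throughput from the values logged above
// for quay.io/tigera/operator:v1.36.7.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const sizeBytes = 19319079.0 // size "19319079" in the log
	dur, err := time.ParseDuration("2.39837656s")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("~%.1f MB/s\n", sizeBytes/dur.Seconds()/1e6)
}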
May 10 00:07:15.698616 kubelet[2619]: I0510 00:07:15.698574 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-cni-log-dir\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.698616 kubelet[2619]: I0510 00:07:15.698617 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xlbw\" (UniqueName: \"kubernetes.io/projected/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-kube-api-access-2xlbw\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699286 kubelet[2619]: I0510 00:07:15.699243 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-xtables-lock\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699286 kubelet[2619]: I0510 00:07:15.699283 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-tigera-ca-bundle\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699389 kubelet[2619]: I0510 00:07:15.699303 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-lib-modules\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699389 kubelet[2619]: I0510 00:07:15.699320 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-cni-net-dir\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699389 kubelet[2619]: I0510 00:07:15.699347 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-policysync\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699389 kubelet[2619]: I0510 00:07:15.699363 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-flexvol-driver-host\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699389 kubelet[2619]: I0510 00:07:15.699381 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-node-certs\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699503 kubelet[2619]: I0510 00:07:15.699400 2619 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-var-run-calico\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699503 kubelet[2619]: I0510 00:07:15.699418 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-var-lib-calico\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.699503 kubelet[2619]: I0510 00:07:15.699436 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39fee0d3-46e2-4f23-b4a9-d4f2d60414e9-cni-bin-dir\") pod \"calico-node-cczsl\" (UID: \"39fee0d3-46e2-4f23-b4a9-d4f2d60414e9\") " pod="calico-system/calico-node-cczsl" May 10 00:07:15.713480 kubelet[2619]: I0510 00:07:15.713427 2619 topology_manager.go:215] "Topology Admit Handler" podUID="56542ad7-7f48-4051-ae36-d7536ab16d6e" podNamespace="calico-system" podName="csi-node-driver-6hd28" May 10 00:07:15.713816 kubelet[2619]: E0510 00:07:15.713719 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hd28" podUID="56542ad7-7f48-4051-ae36-d7536ab16d6e" May 10 00:07:15.800108 kubelet[2619]: I0510 00:07:15.799985 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msrmm\" (UniqueName: \"kubernetes.io/projected/56542ad7-7f48-4051-ae36-d7536ab16d6e-kube-api-access-msrmm\") pod \"csi-node-driver-6hd28\" (UID: \"56542ad7-7f48-4051-ae36-d7536ab16d6e\") " pod="calico-system/csi-node-driver-6hd28" May 10 00:07:15.800108 kubelet[2619]: I0510 00:07:15.800085 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56542ad7-7f48-4051-ae36-d7536ab16d6e-socket-dir\") pod \"csi-node-driver-6hd28\" (UID: \"56542ad7-7f48-4051-ae36-d7536ab16d6e\") " pod="calico-system/csi-node-driver-6hd28" May 10 00:07:15.801726 kubelet[2619]: I0510 00:07:15.801440 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56542ad7-7f48-4051-ae36-d7536ab16d6e-registration-dir\") pod \"csi-node-driver-6hd28\" (UID: \"56542ad7-7f48-4051-ae36-d7536ab16d6e\") " pod="calico-system/csi-node-driver-6hd28" May 10 00:07:15.801726 kubelet[2619]: I0510 00:07:15.801553 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/56542ad7-7f48-4051-ae36-d7536ab16d6e-varrun\") pod \"csi-node-driver-6hd28\" (UID: \"56542ad7-7f48-4051-ae36-d7536ab16d6e\") " pod="calico-system/csi-node-driver-6hd28" May 10 00:07:15.801726 kubelet[2619]: I0510 00:07:15.801613 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56542ad7-7f48-4051-ae36-d7536ab16d6e-kubelet-dir\") pod 
\"csi-node-driver-6hd28\" (UID: \"56542ad7-7f48-4051-ae36-d7536ab16d6e\") " pod="calico-system/csi-node-driver-6hd28" May 10 00:07:15.802613 kubelet[2619]: E0510 00:07:15.802433 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.802613 kubelet[2619]: W0510 00:07:15.802451 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.802613 kubelet[2619]: E0510 00:07:15.802483 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.802739 kubelet[2619]: E0510 00:07:15.802647 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.802739 kubelet[2619]: W0510 00:07:15.802656 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.802739 kubelet[2619]: E0510 00:07:15.802669 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.802916 kubelet[2619]: E0510 00:07:15.802882 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.802916 kubelet[2619]: W0510 00:07:15.802896 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.802916 kubelet[2619]: E0510 00:07:15.802914 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.803285 kubelet[2619]: E0510 00:07:15.803108 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.803285 kubelet[2619]: W0510 00:07:15.803123 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.803285 kubelet[2619]: E0510 00:07:15.803133 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.810130 kubelet[2619]: E0510 00:07:15.809981 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.810130 kubelet[2619]: W0510 00:07:15.809999 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.810130 kubelet[2619]: E0510 00:07:15.810013 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:15.816003 kubelet[2619]: E0510 00:07:15.815923 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.816003 kubelet[2619]: W0510 00:07:15.815947 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.816003 kubelet[2619]: E0510 00:07:15.815962 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.838482 kubelet[2619]: E0510 00:07:15.838450 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:15.839865 containerd[1455]: time="2025-05-10T00:07:15.839347412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d56bdb946-bcbdr,Uid:d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0,Namespace:calico-system,Attempt:0,}" May 10 00:07:15.859219 containerd[1455]: time="2025-05-10T00:07:15.859110287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:15.859219 containerd[1455]: time="2025-05-10T00:07:15.859171889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:15.859219 containerd[1455]: time="2025-05-10T00:07:15.859198689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:15.859516 containerd[1455]: time="2025-05-10T00:07:15.859267251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:15.879056 systemd[1]: Started cri-containerd-42bdf5b5a794237d5ddce9baf71b2f9be31a803e9d1d721b966d39e4c77b7daf.scope - libcontainer container 42bdf5b5a794237d5ddce9baf71b2f9be31a803e9d1d721b966d39e4c77b7daf. May 10 00:07:15.903173 kubelet[2619]: E0510 00:07:15.903030 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.903173 kubelet[2619]: W0510 00:07:15.903051 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.903173 kubelet[2619]: E0510 00:07:15.903069 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:15.903462 kubelet[2619]: E0510 00:07:15.903447 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.903523 kubelet[2619]: W0510 00:07:15.903505 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.903630 kubelet[2619]: E0510 00:07:15.903572 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.904056 kubelet[2619]: E0510 00:07:15.904038 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.904320 kubelet[2619]: W0510 00:07:15.904223 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.904320 kubelet[2619]: E0510 00:07:15.904263 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.905606 kubelet[2619]: E0510 00:07:15.904779 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:15.905712 kubelet[2619]: E0510 00:07:15.905696 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.905760 kubelet[2619]: W0510 00:07:15.905750 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.906004 kubelet[2619]: E0510 00:07:15.905952 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.906219 kubelet[2619]: E0510 00:07:15.906134 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.906219 kubelet[2619]: W0510 00:07:15.906146 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.906583 kubelet[2619]: E0510 00:07:15.906478 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:15.906912 kubelet[2619]: E0510 00:07:15.906897 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.907172 kubelet[2619]: W0510 00:07:15.907032 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.907770 kubelet[2619]: E0510 00:07:15.907465 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.908192 kubelet[2619]: E0510 00:07:15.908174 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.909614 kubelet[2619]: W0510 00:07:15.908248 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.909614 kubelet[2619]: E0510 00:07:15.908345 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.909614 kubelet[2619]: E0510 00:07:15.908535 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.909614 kubelet[2619]: W0510 00:07:15.908544 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.909614 kubelet[2619]: E0510 00:07:15.908707 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.909614 kubelet[2619]: E0510 00:07:15.909147 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.909614 kubelet[2619]: W0510 00:07:15.909162 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.909614 kubelet[2619]: E0510 00:07:15.909253 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:15.910990 containerd[1455]: time="2025-05-10T00:07:15.908643478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cczsl,Uid:39fee0d3-46e2-4f23-b4a9-d4f2d60414e9,Namespace:calico-system,Attempt:0,}" May 10 00:07:15.911054 kubelet[2619]: E0510 00:07:15.909713 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.911054 kubelet[2619]: W0510 00:07:15.909725 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.911054 kubelet[2619]: E0510 00:07:15.909885 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.911054 kubelet[2619]: E0510 00:07:15.910441 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.911054 kubelet[2619]: W0510 00:07:15.910453 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.911054 kubelet[2619]: E0510 00:07:15.910880 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.911963 kubelet[2619]: E0510 00:07:15.911839 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.911963 kubelet[2619]: W0510 00:07:15.911879 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.911963 kubelet[2619]: E0510 00:07:15.911939 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.912245 kubelet[2619]: E0510 00:07:15.912079 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.912245 kubelet[2619]: W0510 00:07:15.912089 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.912245 kubelet[2619]: E0510 00:07:15.912139 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:15.912538 kubelet[2619]: E0510 00:07:15.912366 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.912538 kubelet[2619]: W0510 00:07:15.912378 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.912538 kubelet[2619]: E0510 00:07:15.912437 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.912921 kubelet[2619]: E0510 00:07:15.912867 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.912921 kubelet[2619]: W0510 00:07:15.912883 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.913268 containerd[1455]: time="2025-05-10T00:07:15.913095123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d56bdb946-bcbdr,Uid:d743a4bb-0c0b-4c0e-8393-270ad6a7f9f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"42bdf5b5a794237d5ddce9baf71b2f9be31a803e9d1d721b966d39e4c77b7daf\"" May 10 00:07:15.913398 kubelet[2619]: E0510 00:07:15.913379 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.913672 kubelet[2619]: E0510 00:07:15.913584 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.913672 kubelet[2619]: W0510 00:07:15.913598 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.913672 kubelet[2619]: E0510 00:07:15.913630 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:15.914514 kubelet[2619]: E0510 00:07:15.914433 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.914514 kubelet[2619]: W0510 00:07:15.914447 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.914822 kubelet[2619]: E0510 00:07:15.914802 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:15.915271 kubelet[2619]: E0510 00:07:15.915239 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.915271 kubelet[2619]: W0510 00:07:15.915258 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.915346 kubelet[2619]: E0510 00:07:15.915307 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.915373 kubelet[2619]: E0510 00:07:15.915348 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.915981 containerd[1455]: time="2025-05-10T00:07:15.915869241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 10 00:07:15.916361 kubelet[2619]: E0510 00:07:15.916338 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.916361 kubelet[2619]: W0510 00:07:15.916357 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.916500 kubelet[2619]: E0510 00:07:15.916449 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.916922 kubelet[2619]: E0510 00:07:15.916903 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.916964 kubelet[2619]: W0510 00:07:15.916922 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.917060 kubelet[2619]: E0510 00:07:15.917014 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:15.917885 kubelet[2619]: E0510 00:07:15.917861 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.917885 kubelet[2619]: W0510 00:07:15.917880 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.917964 kubelet[2619]: E0510 00:07:15.917953 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.918196 kubelet[2619]: E0510 00:07:15.918180 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.918196 kubelet[2619]: W0510 00:07:15.918194 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.918336 kubelet[2619]: E0510 00:07:15.918296 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.918517 kubelet[2619]: E0510 00:07:15.918499 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.918517 kubelet[2619]: W0510 00:07:15.918516 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.918580 kubelet[2619]: E0510 00:07:15.918534 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.918787 kubelet[2619]: E0510 00:07:15.918772 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.918821 kubelet[2619]: W0510 00:07:15.918787 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.918821 kubelet[2619]: E0510 00:07:15.918797 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.928886 kubelet[2619]: E0510 00:07:15.927272 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.928886 kubelet[2619]: W0510 00:07:15.927293 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.928886 kubelet[2619]: E0510 00:07:15.927309 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:15.935562 kubelet[2619]: E0510 00:07:15.935476 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:15.935562 kubelet[2619]: W0510 00:07:15.935494 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:15.935562 kubelet[2619]: E0510 00:07:15.935510 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:15.949516 containerd[1455]: time="2025-05-10T00:07:15.948953690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:15.949516 containerd[1455]: time="2025-05-10T00:07:15.949348141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:15.949516 containerd[1455]: time="2025-05-10T00:07:15.949360501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:15.949516 containerd[1455]: time="2025-05-10T00:07:15.949437423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:15.969043 systemd[1]: Started cri-containerd-55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0.scope - libcontainer container 55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0. May 10 00:07:15.993064 containerd[1455]: time="2025-05-10T00:07:15.993012287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cczsl,Uid:39fee0d3-46e2-4f23-b4a9-d4f2d60414e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0\"" May 10 00:07:15.994450 kubelet[2619]: E0510 00:07:15.994106 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:17.281757 containerd[1455]: time="2025-05-10T00:07:17.281700888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:17.282835 containerd[1455]: time="2025-05-10T00:07:17.282794716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 10 00:07:17.283828 containerd[1455]: time="2025-05-10T00:07:17.283788022Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:17.286437 containerd[1455]: time="2025-05-10T00:07:17.286395169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:17.287138 containerd[1455]: time="2025-05-10T00:07:17.286914943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.371011421s" May 10 00:07:17.287138 containerd[1455]: time="2025-05-10T00:07:17.286944943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 10 00:07:17.288117 containerd[1455]: time="2025-05-10T00:07:17.288091813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 10 00:07:17.300654 containerd[1455]: time="2025-05-10T00:07:17.300602096Z" level=info msg="CreateContainer within sandbox \"42bdf5b5a794237d5ddce9baf71b2f9be31a803e9d1d721b966d39e4c77b7daf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 10 00:07:17.312825 containerd[1455]: time="2025-05-10T00:07:17.312747609Z" level=info msg="CreateContainer within sandbox \"42bdf5b5a794237d5ddce9baf71b2f9be31a803e9d1d721b966d39e4c77b7daf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"93e280d26ecd6490b5c991706542c14388798f9aef29e31bf93a798856a4ce0b\"" May 10 00:07:17.313345 containerd[1455]: time="2025-05-10T00:07:17.313318144Z" level=info msg="StartContainer for \"93e280d26ecd6490b5c991706542c14388798f9aef29e31bf93a798856a4ce0b\"" May 10 00:07:17.343000 systemd[1]: Started cri-containerd-93e280d26ecd6490b5c991706542c14388798f9aef29e31bf93a798856a4ce0b.scope - libcontainer container 93e280d26ecd6490b5c991706542c14388798f9aef29e31bf93a798856a4ce0b. May 10 00:07:17.373905 containerd[1455]: time="2025-05-10T00:07:17.373866505Z" level=info msg="StartContainer for \"93e280d26ecd6490b5c991706542c14388798f9aef29e31bf93a798856a4ce0b\" returns successfully" May 10 00:07:17.754511 kubelet[2619]: E0510 00:07:17.754429 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hd28" podUID="56542ad7-7f48-4051-ae36-d7536ab16d6e" May 10 00:07:17.819558 kubelet[2619]: E0510 00:07:17.819526 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:17.895428 kubelet[2619]: E0510 00:07:17.895388 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.895428 kubelet[2619]: W0510 00:07:17.895413 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.895428 kubelet[2619]: E0510 00:07:17.895440 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:17.895664 kubelet[2619]: E0510 00:07:17.895653 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.895664 kubelet[2619]: W0510 00:07:17.895664 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.895732 kubelet[2619]: E0510 00:07:17.895673 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.895900 kubelet[2619]: E0510 00:07:17.895887 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.895900 kubelet[2619]: W0510 00:07:17.895899 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.895972 kubelet[2619]: E0510 00:07:17.895908 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.896079 kubelet[2619]: E0510 00:07:17.896062 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.896079 kubelet[2619]: W0510 00:07:17.896077 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.896135 kubelet[2619]: E0510 00:07:17.896087 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.896261 kubelet[2619]: E0510 00:07:17.896250 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.896291 kubelet[2619]: W0510 00:07:17.896261 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.896291 kubelet[2619]: E0510 00:07:17.896270 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.896413 kubelet[2619]: E0510 00:07:17.896404 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.896413 kubelet[2619]: W0510 00:07:17.896413 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.896466 kubelet[2619]: E0510 00:07:17.896421 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:17.896549 kubelet[2619]: E0510 00:07:17.896541 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.896549 kubelet[2619]: W0510 00:07:17.896550 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.896603 kubelet[2619]: E0510 00:07:17.896557 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.896716 kubelet[2619]: E0510 00:07:17.896706 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.896747 kubelet[2619]: W0510 00:07:17.896717 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.896747 kubelet[2619]: E0510 00:07:17.896727 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.896906 kubelet[2619]: E0510 00:07:17.896895 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.896906 kubelet[2619]: W0510 00:07:17.896906 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.896971 kubelet[2619]: E0510 00:07:17.896914 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.897105 kubelet[2619]: E0510 00:07:17.897091 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.897105 kubelet[2619]: W0510 00:07:17.897102 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.897168 kubelet[2619]: E0510 00:07:17.897109 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.897248 kubelet[2619]: E0510 00:07:17.897239 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.897248 kubelet[2619]: W0510 00:07:17.897248 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.897303 kubelet[2619]: E0510 00:07:17.897255 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:17.897391 kubelet[2619]: E0510 00:07:17.897382 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.897391 kubelet[2619]: W0510 00:07:17.897391 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.897455 kubelet[2619]: E0510 00:07:17.897398 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.897569 kubelet[2619]: E0510 00:07:17.897559 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.897569 kubelet[2619]: W0510 00:07:17.897568 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.897627 kubelet[2619]: E0510 00:07:17.897575 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.897710 kubelet[2619]: E0510 00:07:17.897701 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.897710 kubelet[2619]: W0510 00:07:17.897710 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.897760 kubelet[2619]: E0510 00:07:17.897717 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.897860 kubelet[2619]: E0510 00:07:17.897838 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.897860 kubelet[2619]: W0510 00:07:17.897860 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.897916 kubelet[2619]: E0510 00:07:17.897867 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.921358 kubelet[2619]: E0510 00:07:17.921283 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.921358 kubelet[2619]: W0510 00:07:17.921302 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.921358 kubelet[2619]: E0510 00:07:17.921314 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:17.921572 kubelet[2619]: E0510 00:07:17.921555 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.921572 kubelet[2619]: W0510 00:07:17.921566 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.921572 kubelet[2619]: E0510 00:07:17.921579 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.922016 kubelet[2619]: E0510 00:07:17.921902 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.922016 kubelet[2619]: W0510 00:07:17.921917 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.922016 kubelet[2619]: E0510 00:07:17.921936 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.922207 kubelet[2619]: E0510 00:07:17.922195 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.922258 kubelet[2619]: W0510 00:07:17.922248 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.922352 kubelet[2619]: E0510 00:07:17.922313 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.922547 kubelet[2619]: E0510 00:07:17.922534 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.922689 kubelet[2619]: W0510 00:07:17.922602 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.922689 kubelet[2619]: E0510 00:07:17.922626 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.922916 kubelet[2619]: E0510 00:07:17.922903 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.922976 kubelet[2619]: W0510 00:07:17.922965 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.923059 kubelet[2619]: E0510 00:07:17.923032 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:17.923324 kubelet[2619]: E0510 00:07:17.923268 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.923324 kubelet[2619]: W0510 00:07:17.923280 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.923393 kubelet[2619]: E0510 00:07:17.923325 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.923629 kubelet[2619]: E0510 00:07:17.923574 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.923629 kubelet[2619]: W0510 00:07:17.923586 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.923629 kubelet[2619]: E0510 00:07:17.923614 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.923951 kubelet[2619]: E0510 00:07:17.923866 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.923951 kubelet[2619]: W0510 00:07:17.923878 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.923951 kubelet[2619]: E0510 00:07:17.923895 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.924277 kubelet[2619]: E0510 00:07:17.924203 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.924277 kubelet[2619]: W0510 00:07:17.924215 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.924277 kubelet[2619]: E0510 00:07:17.924233 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.924600 kubelet[2619]: E0510 00:07:17.924510 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.924600 kubelet[2619]: W0510 00:07:17.924522 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.924600 kubelet[2619]: E0510 00:07:17.924538 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:17.924749 kubelet[2619]: E0510 00:07:17.924739 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.924804 kubelet[2619]: W0510 00:07:17.924793 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.924911 kubelet[2619]: E0510 00:07:17.924881 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.925165 kubelet[2619]: E0510 00:07:17.925100 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.925165 kubelet[2619]: W0510 00:07:17.925112 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.925165 kubelet[2619]: E0510 00:07:17.925128 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.925452 kubelet[2619]: E0510 00:07:17.925373 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.925452 kubelet[2619]: W0510 00:07:17.925383 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.925452 kubelet[2619]: E0510 00:07:17.925398 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.925690 kubelet[2619]: E0510 00:07:17.925677 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.925873 kubelet[2619]: W0510 00:07:17.925737 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.925873 kubelet[2619]: E0510 00:07:17.925759 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.925993 kubelet[2619]: E0510 00:07:17.925977 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.925993 kubelet[2619]: W0510 00:07:17.925991 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.926049 kubelet[2619]: E0510 00:07:17.926002 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 10 00:07:17.926352 kubelet[2619]: E0510 00:07:17.926338 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.926425 kubelet[2619]: W0510 00:07:17.926413 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.926489 kubelet[2619]: E0510 00:07:17.926478 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:17.926645 kubelet[2619]: E0510 00:07:17.926631 2619 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:07:17.926645 kubelet[2619]: W0510 00:07:17.926645 2619 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:07:17.926699 kubelet[2619]: E0510 00:07:17.926655 2619 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:07:18.559364 containerd[1455]: time="2025-05-10T00:07:18.559275337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:18.560222 containerd[1455]: time="2025-05-10T00:07:18.560175759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 10 00:07:18.560971 containerd[1455]: time="2025-05-10T00:07:18.560836175Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:18.562829 containerd[1455]: time="2025-05-10T00:07:18.562795184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:18.563502 containerd[1455]: time="2025-05-10T00:07:18.563469360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.274028633s" May 10 00:07:18.563552 containerd[1455]: time="2025-05-10T00:07:18.563501921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 10 00:07:18.565635 containerd[1455]: time="2025-05-10T00:07:18.565519571Z" level=info msg="CreateContainer within sandbox \"55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 00:07:18.578324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261208260.mount: Deactivated successfully. 
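The repeated driver-call.go / plugins.go errors above come from the kubelet's FlexVolume plugin prober executing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init: the binary is not on the host yet, the call returns empty output, and that empty string is what fails to unmarshal as JSON. The flexvol-driver container created just below from the pod2daemon-flexvol image is what normally installs this binary, after which the probes stop failing. A minimal sketch of the init handshake a FlexVolume driver is expected to answer on stdout (illustrative only; Calico's real uds driver implements more than this):

```go
// Minimal sketch of the FlexVolume "init" handshake the kubelet expects from
// the nodeagent~uds/uds driver path seen in the log. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// The kubelet unmarshals stdout as JSON; an empty reply is what produces
		// the "unexpected end of JSON input" errors seen above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		fmt.Println(`{"status":"Not supported"}`)
	}
}
```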
May 10 00:07:18.580300 containerd[1455]: time="2025-05-10T00:07:18.580263056Z" level=info msg="CreateContainer within sandbox \"55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a\"" May 10 00:07:18.580864 containerd[1455]: time="2025-05-10T00:07:18.580832790Z" level=info msg="StartContainer for \"26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a\"" May 10 00:07:18.618029 systemd[1]: Started cri-containerd-26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a.scope - libcontainer container 26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a. May 10 00:07:18.659456 containerd[1455]: time="2025-05-10T00:07:18.659391775Z" level=info msg="StartContainer for \"26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a\" returns successfully" May 10 00:07:18.668100 systemd[1]: cri-containerd-26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a.scope: Deactivated successfully. May 10 00:07:18.728132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a-rootfs.mount: Deactivated successfully. May 10 00:07:18.761556 containerd[1455]: time="2025-05-10T00:07:18.754582011Z" level=info msg="shim disconnected" id=26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a namespace=k8s.io May 10 00:07:18.761556 containerd[1455]: time="2025-05-10T00:07:18.761553623Z" level=warning msg="cleaning up after shim disconnected" id=26e3e9436a2e9c1581c372f7e4fc673e319830f2e2881283d04ab3d134451f0a namespace=k8s.io May 10 00:07:18.761756 containerd[1455]: time="2025-05-10T00:07:18.761570984Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:07:18.822966 kubelet[2619]: I0510 00:07:18.822119 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:18.822966 kubelet[2619]: E0510 00:07:18.822409 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:18.822966 kubelet[2619]: E0510 00:07:18.822901 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:18.824123 containerd[1455]: time="2025-05-10T00:07:18.824071411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 00:07:18.836637 kubelet[2619]: I0510 00:07:18.836457 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d56bdb946-bcbdr" podStartSLOduration=2.464206223 podStartE2EDuration="3.836441477s" podCreationTimestamp="2025-05-10 00:07:15 +0000 UTC" firstStartedPulling="2025-05-10 00:07:15.915569152 +0000 UTC m=+22.243501977" lastFinishedPulling="2025-05-10 00:07:17.287804406 +0000 UTC m=+23.615737231" observedRunningTime="2025-05-10 00:07:17.829001523 +0000 UTC m=+24.156934348" watchObservedRunningTime="2025-05-10 00:07:18.836441477 +0000 UTC m=+25.164374302" May 10 00:07:19.754429 kubelet[2619]: E0510 00:07:19.754378 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-6hd28" podUID="56542ad7-7f48-4051-ae36-d7536ab16d6e" May 10 00:07:21.669202 containerd[1455]: time="2025-05-10T00:07:21.668775330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:21.670174 containerd[1455]: time="2025-05-10T00:07:21.670129240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 10 00:07:21.671067 containerd[1455]: time="2025-05-10T00:07:21.671020780Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:21.675733 containerd[1455]: time="2025-05-10T00:07:21.675460197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:21.676481 containerd[1455]: time="2025-05-10T00:07:21.676455659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.852330167s" May 10 00:07:21.676600 containerd[1455]: time="2025-05-10T00:07:21.676574182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 10 00:07:21.679923 containerd[1455]: time="2025-05-10T00:07:21.679889575Z" level=info msg="CreateContainer within sandbox \"55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 00:07:21.694900 containerd[1455]: time="2025-05-10T00:07:21.694791263Z" level=info msg="CreateContainer within sandbox \"55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb\"" May 10 00:07:21.696016 containerd[1455]: time="2025-05-10T00:07:21.695989369Z" level=info msg="StartContainer for \"a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb\"" May 10 00:07:21.720047 systemd[1]: Started cri-containerd-a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb.scope - libcontainer container a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb. 
May 10 00:07:21.754640 kubelet[2619]: E0510 00:07:21.754274 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hd28" podUID="56542ad7-7f48-4051-ae36-d7536ab16d6e" May 10 00:07:21.756814 containerd[1455]: time="2025-05-10T00:07:21.756691065Z" level=info msg="StartContainer for \"a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb\" returns successfully" May 10 00:07:21.835583 kubelet[2619]: E0510 00:07:21.835534 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:22.368937 systemd[1]: cri-containerd-a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb.scope: Deactivated successfully. May 10 00:07:22.388119 kubelet[2619]: I0510 00:07:22.387427 2619 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 00:07:22.392078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb-rootfs.mount: Deactivated successfully. May 10 00:07:22.422273 kubelet[2619]: I0510 00:07:22.422205 2619 topology_manager.go:215] "Topology Admit Handler" podUID="c06d69b8-4f38-4476-a0a9-074ed47a6924" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kflv6" May 10 00:07:22.429550 systemd[1]: Created slice kubepods-burstable-podc06d69b8_4f38_4476_a0a9_074ed47a6924.slice - libcontainer container kubepods-burstable-podc06d69b8_4f38_4476_a0a9_074ed47a6924.slice. May 10 00:07:22.440636 kubelet[2619]: I0510 00:07:22.439758 2619 topology_manager.go:215] "Topology Admit Handler" podUID="5487806e-6495-4d4f-a191-df1e4f5aa0a8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-srzmr" May 10 00:07:22.440636 kubelet[2619]: I0510 00:07:22.440101 2619 topology_manager.go:215] "Topology Admit Handler" podUID="2aab4214-d322-46c1-9e38-01e24fa563db" podNamespace="calico-system" podName="calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:22.441525 kubelet[2619]: I0510 00:07:22.441497 2619 topology_manager.go:215] "Topology Admit Handler" podUID="4adb856a-b358-4a75-afdb-0a2493e0d860" podNamespace="calico-apiserver" podName="calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:22.442229 kubelet[2619]: I0510 00:07:22.441803 2619 topology_manager.go:215] "Topology Admit Handler" podUID="01ce480b-3a6d-4fd5-af7a-73b802892ab1" podNamespace="calico-apiserver" podName="calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:22.452896 systemd[1]: Created slice kubepods-burstable-pod5487806e_6495_4d4f_a191_df1e4f5aa0a8.slice - libcontainer container kubepods-burstable-pod5487806e_6495_4d4f_a191_df1e4f5aa0a8.slice. 
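The recurring dns.go "Nameserver limits exceeded" events mean the node's resolv.conf carries more nameservers than the kubelet will pass through to pod DNS configuration, so the surplus entries are dropped; the applied line in the log is 1.1.1.1 1.0.0.1 8.8.8.8. A hedged sketch of that truncation, where the cap of three and the extra upstream entry are illustrative assumptions and only the applied line comes from the log:

```python
# Hypothetical host nameserver list; the first three match the
# "applied nameserver line" in the log, the fourth is assumed.
host_nameservers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]

MAX_NAMESERVERS = 3  # assumed cap, consistent with the three applied entries

applied = host_nameservers[:MAX_NAMESERVERS]
omitted = host_nameservers[MAX_NAMESERVERS:]
if omitted:
    print("Nameserver limits were exceeded, some nameservers have been omitted, "
          f"the applied nameserver line is: {' '.join(applied)}")
```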
May 10 00:07:22.454827 kubelet[2619]: I0510 00:07:22.454797 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gs7\" (UniqueName: \"kubernetes.io/projected/c06d69b8-4f38-4476-a0a9-074ed47a6924-kube-api-access-k7gs7\") pod \"coredns-7db6d8ff4d-kflv6\" (UID: \"c06d69b8-4f38-4476-a0a9-074ed47a6924\") " pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:22.455010 kubelet[2619]: I0510 00:07:22.454832 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c06d69b8-4f38-4476-a0a9-074ed47a6924-config-volume\") pod \"coredns-7db6d8ff4d-kflv6\" (UID: \"c06d69b8-4f38-4476-a0a9-074ed47a6924\") " pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:22.458114 systemd[1]: Created slice kubepods-besteffort-pod2aab4214_d322_46c1_9e38_01e24fa563db.slice - libcontainer container kubepods-besteffort-pod2aab4214_d322_46c1_9e38_01e24fa563db.slice. May 10 00:07:22.464154 systemd[1]: Created slice kubepods-besteffort-pod01ce480b_3a6d_4fd5_af7a_73b802892ab1.slice - libcontainer container kubepods-besteffort-pod01ce480b_3a6d_4fd5_af7a_73b802892ab1.slice. May 10 00:07:22.474783 systemd[1]: Created slice kubepods-besteffort-pod4adb856a_b358_4a75_afdb_0a2493e0d860.slice - libcontainer container kubepods-besteffort-pod4adb856a_b358_4a75_afdb_0a2493e0d860.slice. May 10 00:07:22.507351 containerd[1455]: time="2025-05-10T00:07:22.507222656Z" level=info msg="shim disconnected" id=a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb namespace=k8s.io May 10 00:07:22.507351 containerd[1455]: time="2025-05-10T00:07:22.507278058Z" level=warning msg="cleaning up after shim disconnected" id=a06cc5b3de93843a8c25a4e417f62a310f27c3cb15d6610d28b9febae227c7bb namespace=k8s.io May 10 00:07:22.507351 containerd[1455]: time="2025-05-10T00:07:22.507286618Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:07:22.516386 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:34552.service - OpenSSH per-connection server daemon (10.0.0.1:34552). 
May 10 00:07:22.556306 kubelet[2619]: I0510 00:07:22.556082 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aab4214-d322-46c1-9e38-01e24fa563db-tigera-ca-bundle\") pod \"calico-kube-controllers-5c95969b9-5mpjw\" (UID: \"2aab4214-d322-46c1-9e38-01e24fa563db\") " pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:22.556306 kubelet[2619]: I0510 00:07:22.556146 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4adb856a-b358-4a75-afdb-0a2493e0d860-calico-apiserver-certs\") pod \"calico-apiserver-7c5c466cb8-7rrbw\" (UID: \"4adb856a-b358-4a75-afdb-0a2493e0d860\") " pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:22.564979 kubelet[2619]: I0510 00:07:22.556169 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2lr2\" (UniqueName: \"kubernetes.io/projected/4adb856a-b358-4a75-afdb-0a2493e0d860-kube-api-access-v2lr2\") pod \"calico-apiserver-7c5c466cb8-7rrbw\" (UID: \"4adb856a-b358-4a75-afdb-0a2493e0d860\") " pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:22.565059 kubelet[2619]: I0510 00:07:22.565010 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5487806e-6495-4d4f-a191-df1e4f5aa0a8-config-volume\") pod \"coredns-7db6d8ff4d-srzmr\" (UID: \"5487806e-6495-4d4f-a191-df1e4f5aa0a8\") " pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:22.565059 kubelet[2619]: I0510 00:07:22.565040 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmrsr\" (UniqueName: \"kubernetes.io/projected/01ce480b-3a6d-4fd5-af7a-73b802892ab1-kube-api-access-gmrsr\") pod \"calico-apiserver-7c5c466cb8-82f6f\" (UID: \"01ce480b-3a6d-4fd5-af7a-73b802892ab1\") " pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:22.565136 kubelet[2619]: I0510 00:07:22.565062 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd6cj\" (UniqueName: \"kubernetes.io/projected/2aab4214-d322-46c1-9e38-01e24fa563db-kube-api-access-bd6cj\") pod \"calico-kube-controllers-5c95969b9-5mpjw\" (UID: \"2aab4214-d322-46c1-9e38-01e24fa563db\") " pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:22.565136 kubelet[2619]: I0510 00:07:22.565084 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/01ce480b-3a6d-4fd5-af7a-73b802892ab1-calico-apiserver-certs\") pod \"calico-apiserver-7c5c466cb8-82f6f\" (UID: \"01ce480b-3a6d-4fd5-af7a-73b802892ab1\") " pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:22.565136 kubelet[2619]: I0510 00:07:22.565104 2619 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxwfn\" (UniqueName: \"kubernetes.io/projected/5487806e-6495-4d4f-a191-df1e4f5aa0a8-kube-api-access-rxwfn\") pod \"coredns-7db6d8ff4d-srzmr\" (UID: \"5487806e-6495-4d4f-a191-df1e4f5aa0a8\") " pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:22.575901 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 34552 ssh2: RSA 
SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:22.576749 sshd-session[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:22.580888 systemd-logind[1429]: New session 8 of user core. May 10 00:07:22.590057 systemd[1]: Started session-8.scope - Session 8 of User core. May 10 00:07:22.725876 sshd[3370]: Connection closed by 10.0.0.1 port 34552 May 10 00:07:22.725682 sshd-session[3362]: pam_unix(sshd:session): session closed for user core May 10 00:07:22.728990 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. May 10 00:07:22.729114 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:34552.service: Deactivated successfully. May 10 00:07:22.730832 systemd[1]: session-8.scope: Deactivated successfully. May 10 00:07:22.732243 systemd-logind[1429]: Removed session 8. May 10 00:07:22.742802 kubelet[2619]: E0510 00:07:22.742754 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:22.744006 containerd[1455]: time="2025-05-10T00:07:22.743956717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:0,}" May 10 00:07:22.756224 kubelet[2619]: E0510 00:07:22.756177 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:22.756716 containerd[1455]: time="2025-05-10T00:07:22.756678346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:0,}" May 10 00:07:22.762203 containerd[1455]: time="2025-05-10T00:07:22.762155222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:0,}" May 10 00:07:22.770468 containerd[1455]: time="2025-05-10T00:07:22.770355316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:0,}" May 10 00:07:22.779263 containerd[1455]: time="2025-05-10T00:07:22.779185744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:0,}" May 10 00:07:22.839305 kubelet[2619]: E0510 00:07:22.839269 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:22.845275 containerd[1455]: time="2025-05-10T00:07:22.845227944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 10 00:07:23.086614 containerd[1455]: time="2025-05-10T00:07:23.086484717Z" level=error msg="Failed to destroy network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.086928 containerd[1455]: time="2025-05-10T00:07:23.086826244Z" level=error msg="encountered an error cleaning up failed sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.086928 containerd[1455]: time="2025-05-10T00:07:23.086916286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.087629 containerd[1455]: time="2025-05-10T00:07:23.087591380Z" level=error msg="Failed to destroy network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.088403 containerd[1455]: time="2025-05-10T00:07:23.088339235Z" level=error msg="encountered an error cleaning up failed sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.088555 containerd[1455]: time="2025-05-10T00:07:23.088494478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.091295 kubelet[2619]: E0510 00:07:23.091093 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.091295 kubelet[2619]: E0510 00:07:23.091188 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:23.091295 kubelet[2619]: E0510 00:07:23.091233 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:23.091480 kubelet[2619]: E0510 00:07:23.091288 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c95969b9-5mpjw_calico-system(2aab4214-d322-46c1-9e38-01e24fa563db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c95969b9-5mpjw_calico-system(2aab4214-d322-46c1-9e38-01e24fa563db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" podUID="2aab4214-d322-46c1-9e38-01e24fa563db" May 10 00:07:23.091596 kubelet[2619]: E0510 00:07:23.091555 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.091658 kubelet[2619]: E0510 00:07:23.091637 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:23.091691 kubelet[2619]: E0510 00:07:23.091658 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:23.091716 kubelet[2619]: E0510 00:07:23.091691 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-srzmr_kube-system(5487806e-6495-4d4f-a191-df1e4f5aa0a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-srzmr_kube-system(5487806e-6495-4d4f-a191-df1e4f5aa0a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-srzmr" podUID="5487806e-6495-4d4f-a191-df1e4f5aa0a8" May 10 00:07:23.092862 containerd[1455]: time="2025-05-10T00:07:23.092642803Z" level=error msg="Failed to destroy network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 10 00:07:23.094585 containerd[1455]: time="2025-05-10T00:07:23.094553042Z" level=error msg="Failed to destroy network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.097429 containerd[1455]: time="2025-05-10T00:07:23.097398100Z" level=error msg="encountered an error cleaning up failed sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.097659 containerd[1455]: time="2025-05-10T00:07:23.097632705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.098495 kubelet[2619]: E0510 00:07:23.097829 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.098495 kubelet[2619]: E0510 00:07:23.097892 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:23.098495 kubelet[2619]: E0510 00:07:23.097908 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:23.098622 kubelet[2619]: E0510 00:07:23.097943 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kflv6_kube-system(c06d69b8-4f38-4476-a0a9-074ed47a6924)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kflv6_kube-system(c06d69b8-4f38-4476-a0a9-074ed47a6924)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-kflv6" podUID="c06d69b8-4f38-4476-a0a9-074ed47a6924" May 10 00:07:23.098908 containerd[1455]: time="2025-05-10T00:07:23.098812369Z" level=error msg="Failed to destroy network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.099308 containerd[1455]: time="2025-05-10T00:07:23.099158696Z" level=error msg="encountered an error cleaning up failed sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.099308 containerd[1455]: time="2025-05-10T00:07:23.099219378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.099459 kubelet[2619]: E0510 00:07:23.099341 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.099459 kubelet[2619]: E0510 00:07:23.099377 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:23.099459 kubelet[2619]: E0510 00:07:23.099393 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:23.099537 kubelet[2619]: E0510 00:07:23.099456 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c5c466cb8-82f6f_calico-apiserver(01ce480b-3a6d-4fd5-af7a-73b802892ab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5c466cb8-82f6f_calico-apiserver(01ce480b-3a6d-4fd5-af7a-73b802892ab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" podUID="01ce480b-3a6d-4fd5-af7a-73b802892ab1" May 10 00:07:23.101711 containerd[1455]: time="2025-05-10T00:07:23.101661268Z" level=error msg="encountered an error cleaning up failed sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.101779 containerd[1455]: time="2025-05-10T00:07:23.101719869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.102056 kubelet[2619]: E0510 00:07:23.101922 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.102056 kubelet[2619]: E0510 00:07:23.101958 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:23.102056 kubelet[2619]: E0510 00:07:23.101972 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:23.102161 kubelet[2619]: E0510 00:07:23.102013 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c5c466cb8-7rrbw_calico-apiserver(4adb856a-b358-4a75-afdb-0a2493e0d860)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5c466cb8-7rrbw_calico-apiserver(4adb856a-b358-4a75-afdb-0a2493e0d860)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" podUID="4adb856a-b358-4a75-afdb-0a2493e0d860" May 10 
00:07:23.690429 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670-shm.mount: Deactivated successfully. May 10 00:07:23.690526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274-shm.mount: Deactivated successfully. May 10 00:07:23.690573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4-shm.mount: Deactivated successfully. May 10 00:07:23.762396 systemd[1]: Created slice kubepods-besteffort-pod56542ad7_7f48_4051_ae36_d7536ab16d6e.slice - libcontainer container kubepods-besteffort-pod56542ad7_7f48_4051_ae36_d7536ab16d6e.slice. May 10 00:07:23.765745 containerd[1455]: time="2025-05-10T00:07:23.765710409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hd28,Uid:56542ad7-7f48-4051-ae36-d7536ab16d6e,Namespace:calico-system,Attempt:0,}" May 10 00:07:23.845312 kubelet[2619]: I0510 00:07:23.845272 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6" May 10 00:07:23.846180 containerd[1455]: time="2025-05-10T00:07:23.845768047Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\"" May 10 00:07:23.846180 containerd[1455]: time="2025-05-10T00:07:23.845951891Z" level=info msg="Ensure that sandbox bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6 in task-service has been cleanup successfully" May 10 00:07:23.846180 containerd[1455]: time="2025-05-10T00:07:23.846154455Z" level=info msg="TearDown network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" successfully" May 10 00:07:23.846180 containerd[1455]: time="2025-05-10T00:07:23.846169775Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" returns successfully" May 10 00:07:23.849302 kubelet[2619]: I0510 00:07:23.848493 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526" May 10 00:07:23.850517 systemd[1]: run-netns-cni\x2d5ed55b50\x2d36ba\x2d7c7b\x2d27c8\x2dbf5ed8ddc5b1.mount: Deactivated successfully. 
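Every RunPodSandbox failure above reports the same underlying condition: the Calico CNI plugin stats /var/lib/calico/nodename before it will set up pod networking, and that file does not exist until the calico/node container is running and has mounted /var/lib/calico/. A minimal Python sketch of that gate, assuming only the path and the hint text printed in the log:

```python
import os

NODENAME_FILE = "/var/lib/calico/nodename"  # path from the sandbox errors above

def calico_nodename() -> str:
    """Re-create the readiness check the CNI plugin reports in the log:
    stat the nodename file written by calico/node, or fail with the same hint."""
    try:
        os.stat(NODENAME_FILE)
    except FileNotFoundError:
        raise RuntimeError(
            f"stat {NODENAME_FILE}: no such file or directory: check that the "
            "calico/node container is running and has mounted /var/lib/calico/"
        )
    with open(NODENAME_FILE) as f:
        return f.read().strip()

if __name__ == "__main__":
    try:
        print("node name:", calico_nodename())
    except RuntimeError as err:
        print("pod sandbox setup would fail:", err)
```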
May 10 00:07:23.851546 containerd[1455]: time="2025-05-10T00:07:23.851509284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:1,}" May 10 00:07:23.851829 containerd[1455]: time="2025-05-10T00:07:23.851795010Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\"" May 10 00:07:23.851988 containerd[1455]: time="2025-05-10T00:07:23.851966294Z" level=info msg="Ensure that sandbox dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526 in task-service has been cleanup successfully" May 10 00:07:23.853891 containerd[1455]: time="2025-05-10T00:07:23.852453624Z" level=info msg="TearDown network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" successfully" May 10 00:07:23.853891 containerd[1455]: time="2025-05-10T00:07:23.852477664Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" returns successfully" May 10 00:07:23.853891 containerd[1455]: time="2025-05-10T00:07:23.853291721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:1,}" May 10 00:07:23.854309 kubelet[2619]: I0510 00:07:23.854106 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274" May 10 00:07:23.854971 containerd[1455]: time="2025-05-10T00:07:23.854947195Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\"" May 10 00:07:23.855116 containerd[1455]: time="2025-05-10T00:07:23.855096998Z" level=info msg="Ensure that sandbox c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274 in task-service has been cleanup successfully" May 10 00:07:23.855630 systemd[1]: run-netns-cni\x2d9d427df5\x2d40ec\x2d270f\x2d5b34\x2dd02b4d2f213a.mount: Deactivated successfully. 
May 10 00:07:23.856647 containerd[1455]: time="2025-05-10T00:07:23.856287382Z" level=info msg="TearDown network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" successfully" May 10 00:07:23.856647 containerd[1455]: time="2025-05-10T00:07:23.856617189Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" returns successfully" May 10 00:07:23.856965 kubelet[2619]: E0510 00:07:23.856943 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:23.857426 containerd[1455]: time="2025-05-10T00:07:23.857383645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:1,}" May 10 00:07:23.858892 kubelet[2619]: I0510 00:07:23.858530 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670" May 10 00:07:23.859970 containerd[1455]: time="2025-05-10T00:07:23.859929017Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\"" May 10 00:07:23.860115 containerd[1455]: time="2025-05-10T00:07:23.860085580Z" level=info msg="Ensure that sandbox 91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670 in task-service has been cleanup successfully" May 10 00:07:23.864830 kubelet[2619]: I0510 00:07:23.864805 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4" May 10 00:07:23.865424 containerd[1455]: time="2025-05-10T00:07:23.864994320Z" level=info msg="TearDown network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" successfully" May 10 00:07:23.865424 containerd[1455]: time="2025-05-10T00:07:23.865026521Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" returns successfully" May 10 00:07:23.866457 containerd[1455]: time="2025-05-10T00:07:23.865554972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:1,}" May 10 00:07:23.866457 containerd[1455]: time="2025-05-10T00:07:23.865584172Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\"" May 10 00:07:23.866457 containerd[1455]: time="2025-05-10T00:07:23.865722975Z" level=info msg="Ensure that sandbox d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4 in task-service has been cleanup successfully" May 10 00:07:23.866613 containerd[1455]: time="2025-05-10T00:07:23.866476270Z" level=info msg="TearDown network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" successfully" May 10 00:07:23.866613 containerd[1455]: time="2025-05-10T00:07:23.866495551Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" returns successfully" May 10 00:07:23.866982 kubelet[2619]: E0510 00:07:23.866815 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:23.867328 containerd[1455]: time="2025-05-10T00:07:23.867291887Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:1,}" May 10 00:07:23.886551 containerd[1455]: time="2025-05-10T00:07:23.886492480Z" level=error msg="Failed to destroy network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.896082 containerd[1455]: time="2025-05-10T00:07:23.896017675Z" level=error msg="encountered an error cleaning up failed sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.896227 containerd[1455]: time="2025-05-10T00:07:23.896104396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hd28,Uid:56542ad7-7f48-4051-ae36-d7536ab16d6e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.896445 kubelet[2619]: E0510 00:07:23.896353 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:23.896445 kubelet[2619]: E0510 00:07:23.896411 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hd28" May 10 00:07:23.896445 kubelet[2619]: E0510 00:07:23.896430 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hd28" May 10 00:07:23.896566 kubelet[2619]: E0510 00:07:23.896464 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6hd28_calico-system(56542ad7-7f48-4051-ae36-d7536ab16d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6hd28_calico-system(56542ad7-7f48-4051-ae36-d7536ab16d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hd28" podUID="56542ad7-7f48-4051-ae36-d7536ab16d6e" May 10 00:07:24.171943 containerd[1455]: time="2025-05-10T00:07:24.171688474Z" level=error msg="Failed to destroy network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.173465 containerd[1455]: time="2025-05-10T00:07:24.173096501Z" level=error msg="encountered an error cleaning up failed sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.173465 containerd[1455]: time="2025-05-10T00:07:24.173298825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.174498 kubelet[2619]: E0510 00:07:24.173942 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.174498 kubelet[2619]: E0510 00:07:24.174001 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:24.174498 kubelet[2619]: E0510 00:07:24.174021 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:24.174641 kubelet[2619]: E0510 00:07:24.174055 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c5c466cb8-82f6f_calico-apiserver(01ce480b-3a6d-4fd5-af7a-73b802892ab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5c466cb8-82f6f_calico-apiserver(01ce480b-3a6d-4fd5-af7a-73b802892ab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" podUID="01ce480b-3a6d-4fd5-af7a-73b802892ab1" May 10 00:07:24.186308 containerd[1455]: time="2025-05-10T00:07:24.186150079Z" level=error msg="Failed to destroy network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.186695 containerd[1455]: time="2025-05-10T00:07:24.186665049Z" level=error msg="encountered an error cleaning up failed sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.187271 containerd[1455]: time="2025-05-10T00:07:24.187139139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.187427 kubelet[2619]: E0510 00:07:24.187379 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.187477 kubelet[2619]: E0510 00:07:24.187436 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:24.187477 kubelet[2619]: E0510 00:07:24.187461 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:24.187597 kubelet[2619]: E0510 00:07:24.187495 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c95969b9-5mpjw_calico-system(2aab4214-d322-46c1-9e38-01e24fa563db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5c95969b9-5mpjw_calico-system(2aab4214-d322-46c1-9e38-01e24fa563db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" podUID="2aab4214-d322-46c1-9e38-01e24fa563db" May 10 00:07:24.203485 containerd[1455]: time="2025-05-10T00:07:24.203436941Z" level=error msg="Failed to destroy network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.204718 containerd[1455]: time="2025-05-10T00:07:24.204645404Z" level=error msg="encountered an error cleaning up failed sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.205286 containerd[1455]: time="2025-05-10T00:07:24.205246416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.205873 kubelet[2619]: E0510 00:07:24.205633 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.205873 kubelet[2619]: E0510 00:07:24.205722 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:24.205873 kubelet[2619]: E0510 00:07:24.205739 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:24.205992 kubelet[2619]: E0510 00:07:24.205774 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7c5c466cb8-7rrbw_calico-apiserver(4adb856a-b358-4a75-afdb-0a2493e0d860)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5c466cb8-7rrbw_calico-apiserver(4adb856a-b358-4a75-afdb-0a2493e0d860)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" podUID="4adb856a-b358-4a75-afdb-0a2493e0d860" May 10 00:07:24.206346 containerd[1455]: time="2025-05-10T00:07:24.206248196Z" level=error msg="Failed to destroy network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.206777 containerd[1455]: time="2025-05-10T00:07:24.206741206Z" level=error msg="encountered an error cleaning up failed sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.206830 containerd[1455]: time="2025-05-10T00:07:24.206796527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.207779 kubelet[2619]: E0510 00:07:24.206981 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.207779 kubelet[2619]: E0510 00:07:24.207029 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:24.207779 kubelet[2619]: E0510 00:07:24.207049 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:24.207965 kubelet[2619]: E0510 
00:07:24.207081 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-srzmr_kube-system(5487806e-6495-4d4f-a191-df1e4f5aa0a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-srzmr_kube-system(5487806e-6495-4d4f-a191-df1e4f5aa0a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-srzmr" podUID="5487806e-6495-4d4f-a191-df1e4f5aa0a8" May 10 00:07:24.216015 containerd[1455]: time="2025-05-10T00:07:24.215972028Z" level=error msg="Failed to destroy network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.216383 containerd[1455]: time="2025-05-10T00:07:24.216351836Z" level=error msg="encountered an error cleaning up failed sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.216466 containerd[1455]: time="2025-05-10T00:07:24.216417317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.216653 kubelet[2619]: E0510 00:07:24.216621 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:24.216697 kubelet[2619]: E0510 00:07:24.216672 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:24.216725 kubelet[2619]: E0510 00:07:24.216697 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:24.216762 kubelet[2619]: E0510 00:07:24.216736 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kflv6_kube-system(c06d69b8-4f38-4476-a0a9-074ed47a6924)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kflv6_kube-system(c06d69b8-4f38-4476-a0a9-074ed47a6924)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kflv6" podUID="c06d69b8-4f38-4476-a0a9-074ed47a6924" May 10 00:07:24.691823 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b-shm.mount: Deactivated successfully. May 10 00:07:24.691920 systemd[1]: run-netns-cni\x2de3491cfe\x2d8f6f\x2d2b30\x2dcddd\x2d662a32a69752.mount: Deactivated successfully. May 10 00:07:24.691969 systemd[1]: run-netns-cni\x2d8a666066\x2d8e6f\x2d9953\x2d0e33\x2ddc40b9b6d370.mount: Deactivated successfully. May 10 00:07:24.692015 systemd[1]: run-netns-cni\x2de0bfb9a3\x2dc0ee\x2d333f\x2de4ef\x2d32ebd1715761.mount: Deactivated successfully. May 10 00:07:24.867488 kubelet[2619]: I0510 00:07:24.867449 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65" May 10 00:07:24.868937 containerd[1455]: time="2025-05-10T00:07:24.868030985Z" level=info msg="StopPodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\"" May 10 00:07:24.871727 containerd[1455]: time="2025-05-10T00:07:24.869509414Z" level=info msg="Ensure that sandbox e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65 in task-service has been cleanup successfully" May 10 00:07:24.872061 kubelet[2619]: I0510 00:07:24.871867 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b" May 10 00:07:24.872015 systemd[1]: run-netns-cni\x2d176a8494\x2deb63\x2d30b2\x2dd502\x2d6b8a45575002.mount: Deactivated successfully. 
May 10 00:07:24.872503 containerd[1455]: time="2025-05-10T00:07:24.871875541Z" level=info msg="TearDown network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" successfully" May 10 00:07:24.872503 containerd[1455]: time="2025-05-10T00:07:24.871901101Z" level=info msg="StopPodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" returns successfully" May 10 00:07:24.872606 containerd[1455]: time="2025-05-10T00:07:24.872520354Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\"" May 10 00:07:24.873326 containerd[1455]: time="2025-05-10T00:07:24.872794919Z" level=info msg="StopPodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\"" May 10 00:07:24.873326 containerd[1455]: time="2025-05-10T00:07:24.873262648Z" level=info msg="TearDown network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" successfully" May 10 00:07:24.873326 containerd[1455]: time="2025-05-10T00:07:24.873280569Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" returns successfully" May 10 00:07:24.873752 containerd[1455]: time="2025-05-10T00:07:24.873540694Z" level=info msg="Ensure that sandbox d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b in task-service has been cleanup successfully" May 10 00:07:24.875021 kubelet[2619]: I0510 00:07:24.874984 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5" May 10 00:07:24.875189 containerd[1455]: time="2025-05-10T00:07:24.875160806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:2,}" May 10 00:07:24.875389 containerd[1455]: time="2025-05-10T00:07:24.875357010Z" level=info msg="StopPodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\"" May 10 00:07:24.875518 containerd[1455]: time="2025-05-10T00:07:24.875494932Z" level=info msg="Ensure that sandbox b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5 in task-service has been cleanup successfully" May 10 00:07:24.875771 containerd[1455]: time="2025-05-10T00:07:24.875746777Z" level=info msg="TearDown network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" successfully" May 10 00:07:24.875771 containerd[1455]: time="2025-05-10T00:07:24.875767498Z" level=info msg="StopPodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" returns successfully" May 10 00:07:24.876143 containerd[1455]: time="2025-05-10T00:07:24.876122545Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\"" May 10 00:07:24.876214 containerd[1455]: time="2025-05-10T00:07:24.876196426Z" level=info msg="TearDown network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" successfully" May 10 00:07:24.876214 containerd[1455]: time="2025-05-10T00:07:24.876210066Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" returns successfully" May 10 00:07:24.876696 containerd[1455]: time="2025-05-10T00:07:24.876629475Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:2,}" May 10 00:07:24.878158 systemd[1]: run-netns-cni\x2da70ce769\x2da4bd\x2defbb\x2d6efa\x2d1fa48735d100.mount: Deactivated successfully. May 10 00:07:24.879773 kubelet[2619]: I0510 00:07:24.879349 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5" May 10 00:07:24.880780 containerd[1455]: time="2025-05-10T00:07:24.880630874Z" level=info msg="StopPodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\"" May 10 00:07:24.881925 kubelet[2619]: I0510 00:07:24.881907 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0" May 10 00:07:24.882449 containerd[1455]: time="2025-05-10T00:07:24.882416949Z" level=info msg="StopPodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\"" May 10 00:07:24.882573 containerd[1455]: time="2025-05-10T00:07:24.882551152Z" level=info msg="Ensure that sandbox 25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0 in task-service has been cleanup successfully" May 10 00:07:24.882693 containerd[1455]: time="2025-05-10T00:07:24.882610833Z" level=info msg="Ensure that sandbox f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5 in task-service has been cleanup successfully" May 10 00:07:24.882737 containerd[1455]: time="2025-05-10T00:07:24.882718075Z" level=info msg="TearDown network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" successfully" May 10 00:07:24.882774 containerd[1455]: time="2025-05-10T00:07:24.882758916Z" level=info msg="StopPodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" returns successfully" May 10 00:07:24.883217 containerd[1455]: time="2025-05-10T00:07:24.883047361Z" level=info msg="TearDown network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" successfully" May 10 00:07:24.883217 containerd[1455]: time="2025-05-10T00:07:24.883076042Z" level=info msg="StopPodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" returns successfully" May 10 00:07:24.883374 containerd[1455]: time="2025-05-10T00:07:24.883328767Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\"" May 10 00:07:24.883374 containerd[1455]: time="2025-05-10T00:07:24.883410649Z" level=info msg="TearDown network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" successfully" May 10 00:07:24.883374 containerd[1455]: time="2025-05-10T00:07:24.883431569Z" level=info msg="StopPodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" returns successfully" May 10 00:07:24.883374 containerd[1455]: time="2025-05-10T00:07:24.883467770Z" level=info msg="TearDown network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" successfully" May 10 00:07:24.883374 containerd[1455]: time="2025-05-10T00:07:24.883481330Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" returns successfully" May 10 00:07:24.884464 containerd[1455]: time="2025-05-10T00:07:24.884137543Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-6hd28,Uid:56542ad7-7f48-4051-ae36-d7536ab16d6e,Namespace:calico-system,Attempt:1,}" May 10 00:07:24.884684 kubelet[2619]: I0510 00:07:24.884582 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487" May 10 00:07:24.885507 containerd[1455]: time="2025-05-10T00:07:24.885459569Z" level=info msg="StopPodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\"" May 10 00:07:24.886420 containerd[1455]: time="2025-05-10T00:07:24.886388467Z" level=info msg="Ensure that sandbox 921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487 in task-service has been cleanup successfully" May 10 00:07:24.886982 kubelet[2619]: E0510 00:07:24.886965 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:24.887448 containerd[1455]: time="2025-05-10T00:07:24.887407727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:2,}" May 10 00:07:24.887872 systemd[1]: run-netns-cni\x2dda686457\x2d3c8d\x2d7620\x2df824\x2dab10fe9e9174.mount: Deactivated successfully. May 10 00:07:24.887958 systemd[1]: run-netns-cni\x2d1530f3c9\x2dd70d\x2d0a53\x2de3fa\x2da7e03a6882b9.mount: Deactivated successfully. May 10 00:07:24.888006 systemd[1]: run-netns-cni\x2d713ebfe1\x2de0a1\x2d4e16\x2d9fb1\x2d0811651166c1.mount: Deactivated successfully. May 10 00:07:24.889072 containerd[1455]: time="2025-05-10T00:07:24.889042960Z" level=info msg="TearDown network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" successfully" May 10 00:07:24.889072 containerd[1455]: time="2025-05-10T00:07:24.889071480Z" level=info msg="StopPodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" returns successfully" May 10 00:07:24.890716 systemd[1]: run-netns-cni\x2da5279efb\x2defd0\x2df08f\x2dcab9\x2d7351e4d4d021.mount: Deactivated successfully. 
May 10 00:07:24.895271 containerd[1455]: time="2025-05-10T00:07:24.894686551Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\"" May 10 00:07:24.895271 containerd[1455]: time="2025-05-10T00:07:24.894792833Z" level=info msg="TearDown network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" successfully" May 10 00:07:24.895271 containerd[1455]: time="2025-05-10T00:07:24.894804474Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" returns successfully" May 10 00:07:24.895405 kubelet[2619]: E0510 00:07:24.895044 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:24.895439 containerd[1455]: time="2025-05-10T00:07:24.895357284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:2,}" May 10 00:07:24.998576 containerd[1455]: time="2025-05-10T00:07:24.998453360Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\"" May 10 00:07:24.998687 containerd[1455]: time="2025-05-10T00:07:24.998580803Z" level=info msg="TearDown network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" successfully" May 10 00:07:24.998687 containerd[1455]: time="2025-05-10T00:07:24.998592363Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" returns successfully" May 10 00:07:24.999464 containerd[1455]: time="2025-05-10T00:07:24.999439340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:2,}" May 10 00:07:25.070654 containerd[1455]: time="2025-05-10T00:07:25.070604779Z" level=error msg="Failed to destroy network for sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.071298 containerd[1455]: time="2025-05-10T00:07:25.071251351Z" level=error msg="encountered an error cleaning up failed sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.071350 containerd[1455]: time="2025-05-10T00:07:25.071321073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.071934 kubelet[2619]: E0510 00:07:25.071551 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.071934 kubelet[2619]: E0510 00:07:25.071619 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:25.071934 kubelet[2619]: E0510 00:07:25.071638 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:25.072103 kubelet[2619]: E0510 00:07:25.071677 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c95969b9-5mpjw_calico-system(2aab4214-d322-46c1-9e38-01e24fa563db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c95969b9-5mpjw_calico-system(2aab4214-d322-46c1-9e38-01e24fa563db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" podUID="2aab4214-d322-46c1-9e38-01e24fa563db" May 10 00:07:25.192048 containerd[1455]: time="2025-05-10T00:07:25.191906974Z" level=error msg="Failed to destroy network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.192458 containerd[1455]: time="2025-05-10T00:07:25.192428664Z" level=error msg="encountered an error cleaning up failed sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.192692 containerd[1455]: time="2025-05-10T00:07:25.192616308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.193211 kubelet[2619]: E0510 00:07:25.192920 2619 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.193211 kubelet[2619]: E0510 00:07:25.193100 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:25.193211 kubelet[2619]: E0510 00:07:25.193120 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:25.193346 kubelet[2619]: E0510 00:07:25.193165 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kflv6_kube-system(c06d69b8-4f38-4476-a0a9-074ed47a6924)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kflv6_kube-system(c06d69b8-4f38-4476-a0a9-074ed47a6924)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kflv6" podUID="c06d69b8-4f38-4476-a0a9-074ed47a6924" May 10 00:07:25.197742 containerd[1455]: time="2025-05-10T00:07:25.197585923Z" level=error msg="Failed to destroy network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.198093 containerd[1455]: time="2025-05-10T00:07:25.198066212Z" level=error msg="encountered an error cleaning up failed sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.198165 containerd[1455]: time="2025-05-10T00:07:25.198142253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hd28,Uid:56542ad7-7f48-4051-ae36-d7536ab16d6e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 
00:07:25.198610 kubelet[2619]: E0510 00:07:25.198578 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.198666 kubelet[2619]: E0510 00:07:25.198628 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hd28" May 10 00:07:25.198666 kubelet[2619]: E0510 00:07:25.198651 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hd28" May 10 00:07:25.198732 kubelet[2619]: E0510 00:07:25.198690 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6hd28_calico-system(56542ad7-7f48-4051-ae36-d7536ab16d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6hd28_calico-system(56542ad7-7f48-4051-ae36-d7536ab16d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hd28" podUID="56542ad7-7f48-4051-ae36-d7536ab16d6e" May 10 00:07:25.203830 containerd[1455]: time="2025-05-10T00:07:25.203790601Z" level=error msg="Failed to destroy network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.204172 containerd[1455]: time="2025-05-10T00:07:25.204134008Z" level=error msg="encountered an error cleaning up failed sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.204216 containerd[1455]: time="2025-05-10T00:07:25.204198089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" May 10 00:07:25.204421 kubelet[2619]: E0510 00:07:25.204391 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.204473 kubelet[2619]: E0510 00:07:25.204438 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:25.204473 kubelet[2619]: E0510 00:07:25.204456 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:25.204530 kubelet[2619]: E0510 00:07:25.204493 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-srzmr_kube-system(5487806e-6495-4d4f-a191-df1e4f5aa0a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-srzmr_kube-system(5487806e-6495-4d4f-a191-df1e4f5aa0a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-srzmr" podUID="5487806e-6495-4d4f-a191-df1e4f5aa0a8" May 10 00:07:25.211758 containerd[1455]: time="2025-05-10T00:07:25.211712312Z" level=error msg="Failed to destroy network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.212041 containerd[1455]: time="2025-05-10T00:07:25.212015198Z" level=error msg="encountered an error cleaning up failed sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.212133 containerd[1455]: time="2025-05-10T00:07:25.212070879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.212285 kubelet[2619]: E0510 00:07:25.212239 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.212326 kubelet[2619]: E0510 00:07:25.212302 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:25.212326 kubelet[2619]: E0510 00:07:25.212321 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:25.212382 kubelet[2619]: E0510 00:07:25.212361 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c5c466cb8-7rrbw_calico-apiserver(4adb856a-b358-4a75-afdb-0a2493e0d860)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5c466cb8-7rrbw_calico-apiserver(4adb856a-b358-4a75-afdb-0a2493e0d860)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" podUID="4adb856a-b358-4a75-afdb-0a2493e0d860" May 10 00:07:25.224207 containerd[1455]: time="2025-05-10T00:07:25.224086469Z" level=error msg="Failed to destroy network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.225013 containerd[1455]: time="2025-05-10T00:07:25.224983846Z" level=error msg="encountered an error cleaning up failed sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.225356 containerd[1455]: time="2025-05-10T00:07:25.225110808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed 
to setup network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.225446 kubelet[2619]: E0510 00:07:25.225320 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:25.225446 kubelet[2619]: E0510 00:07:25.225372 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:25.225446 kubelet[2619]: E0510 00:07:25.225393 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:25.225566 kubelet[2619]: E0510 00:07:25.225436 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c5c466cb8-82f6f_calico-apiserver(01ce480b-3a6d-4fd5-af7a-73b802892ab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5c466cb8-82f6f_calico-apiserver(01ce480b-3a6d-4fd5-af7a-73b802892ab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" podUID="01ce480b-3a6d-4fd5-af7a-73b802892ab1" May 10 00:07:25.889391 kubelet[2619]: I0510 00:07:25.888105 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b" May 10 00:07:25.889860 containerd[1455]: time="2025-05-10T00:07:25.889090281Z" level=info msg="StopPodSandbox for \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\"" May 10 00:07:25.889860 containerd[1455]: time="2025-05-10T00:07:25.889351726Z" level=info msg="Ensure that sandbox adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b in task-service has been cleanup successfully" May 10 00:07:25.892957 kubelet[2619]: I0510 00:07:25.890706 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435" May 10 00:07:25.891804 systemd[1]: run-netns-cni\x2dc595404b\x2dc786\x2d2abe\x2d41a1\x2d0348fe3d4e0a.mount: Deactivated successfully. 
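The pattern repeating above (StopPodSandbox, "Ensure that sandbox ... has been cleanup successfully", TearDown, then RunPodSandbox with the Attempt counter bumped and a fresh sandbox ID) is the normal recovery path: each failed sandbox is torn down, its CNI network namespace mount is released (the run-netns-cni mount units systemd deactivates), and the pod worker retries on the next sync. The Go sketch below is only a schematic of that loop; syncPodSandbox and setupNetwork are hypothetical names, not kubelet or containerd APIs.

package main

import (
	"errors"
	"fmt"
)

// setupNetwork stands in for the CNI add that keeps failing in the log.
func setupNetwork() error {
	return errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
}

// syncPodSandbox is a schematic of the retry behaviour in the log: every
// failed attempt is torn down and retried with Attempt incremented, which is
// why the entries show Attempt:1, Attempt:2, Attempt:3 with new sandbox IDs.
func syncPodSandbox(pod string, maxAttempts int) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		fmt.Printf("RunPodSandbox %s Attempt:%d\n", pod, attempt)
		if err := setupNetwork(); err != nil {
			fmt.Printf("  failed: %v\n", err)
			fmt.Println("  StopPodSandbox + TearDown, retrying on next sync")
			continue
		}
		fmt.Println("  sandbox ready")
		return
	}
}

func main() {
	syncPodSandbox("coredns-7db6d8ff4d-kflv6_kube-system", 3)
}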
May 10 00:07:25.893821 containerd[1455]: time="2025-05-10T00:07:25.891291243Z" level=info msg="StopPodSandbox for \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\"" May 10 00:07:25.893821 containerd[1455]: time="2025-05-10T00:07:25.892178620Z" level=info msg="Ensure that sandbox 26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435 in task-service has been cleanup successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.893628128Z" level=info msg="TearDown network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\" successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.896601865Z" level=info msg="StopPodSandbox for \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\" returns successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.896175497Z" level=info msg="TearDown network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\" successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.896933991Z" level=info msg="StopPodSandbox for \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\" returns successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.898285937Z" level=info msg="StopPodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\"" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.898346698Z" level=info msg="StopPodSandbox for \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\"" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.898720545Z" level=info msg="Ensure that sandbox 8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4 in task-service has been cleanup successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.898368818Z" level=info msg="StopPodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\"" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.898926069Z" level=info msg="TearDown network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.898941629Z" level=info msg="StopPodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" returns successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.899296516Z" level=info msg="TearDown network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.899416278Z" level=info msg="StopPodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" returns successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.900412297Z" level=info msg="TearDown network for sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\" successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.900434978Z" level=info msg="StopPodSandbox for \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\" returns successfully" May 10 00:07:25.901002 containerd[1455]: time="2025-05-10T00:07:25.900494659Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\"" May 10 00:07:25.898410 systemd[1]: run-netns-cni\x2d84eb288f\x2dc1b4\x2ddf84\x2dc1be\x2dbcd64bd6bab9.mount: Deactivated 
successfully. May 10 00:07:25.901486 kubelet[2619]: I0510 00:07:25.897391 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4" May 10 00:07:25.901528 containerd[1455]: time="2025-05-10T00:07:25.901126911Z" level=info msg="TearDown network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" successfully" May 10 00:07:25.901528 containerd[1455]: time="2025-05-10T00:07:25.900613821Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\"" May 10 00:07:25.902397 containerd[1455]: time="2025-05-10T00:07:25.901599680Z" level=info msg="TearDown network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" successfully" May 10 00:07:25.902397 containerd[1455]: time="2025-05-10T00:07:25.901625721Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" returns successfully" May 10 00:07:25.902397 containerd[1455]: time="2025-05-10T00:07:25.901677562Z" level=info msg="StopPodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\"" May 10 00:07:25.902397 containerd[1455]: time="2025-05-10T00:07:25.901778683Z" level=info msg="TearDown network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" successfully" May 10 00:07:25.902397 containerd[1455]: time="2025-05-10T00:07:25.901193792Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" returns successfully" May 10 00:07:25.902397 containerd[1455]: time="2025-05-10T00:07:25.901811764Z" level=info msg="StopPodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" returns successfully" May 10 00:07:25.902596 kubelet[2619]: E0510 00:07:25.902090 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:25.902596 kubelet[2619]: E0510 00:07:25.902295 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:25.902654 containerd[1455]: time="2025-05-10T00:07:25.902626780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:3,}" May 10 00:07:25.904209 containerd[1455]: time="2025-05-10T00:07:25.903016147Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\"" May 10 00:07:25.904209 containerd[1455]: time="2025-05-10T00:07:25.903139109Z" level=info msg="TearDown network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" successfully" May 10 00:07:25.904209 containerd[1455]: time="2025-05-10T00:07:25.903152870Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" returns successfully" May 10 00:07:25.903994 systemd[1]: run-netns-cni\x2d0ba54484\x2dbbac\x2dfc0a\x2d4344\x2d8da2c97589e4.mount: Deactivated successfully. 
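The run-netns-cni\x2d....mount units reported as deactivated above are the mount units for CNI network namespaces under /run/netns: systemd unit names encode "/" as "-" and a literal "-" as \x2d, so run-netns-cni\x2d0ba54484\x2d... corresponds to /run/netns/cni-0ba54484-.... The small Go sketch below performs just that reverse mapping; it assumes only the \x2d escape seen in this log, not the full systemd-escape grammar.

package main

import (
	"fmt"
	"strings"
)

// unitToPath converts a systemd mount unit name such as
// "run-netns-cni\x2d0ba54484\x2dbbac.mount" back into the mounted path.
// Only the \x2d ("-") escape that appears in this log is handled.
func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	name = strings.ReplaceAll(name, `\x2d`, "\x00") // protect escaped dashes
	name = strings.ReplaceAll(name, "-", "/")       // plain dashes are path separators
	name = strings.ReplaceAll(name, "\x00", "-")    // restore the literal dashes
	return "/" + name
}

func main() {
	// Prints /run/netns/cni-0ba54484-bbac-fc0a-4344-8da2c97589e4, the netns
	// behind one of the deactivated mount units above.
	fmt.Println(unitToPath(`run-netns-cni\x2d0ba54484\x2dbbac\x2dfc0a\x2d4344\x2d8da2c97589e4.mount`))
}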
May 10 00:07:25.905682 kubelet[2619]: I0510 00:07:25.904504 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8" May 10 00:07:25.905727 containerd[1455]: time="2025-05-10T00:07:25.904556376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:3,}" May 10 00:07:25.905727 containerd[1455]: time="2025-05-10T00:07:25.905509555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:3,}" May 10 00:07:25.905807 containerd[1455]: time="2025-05-10T00:07:25.905776840Z" level=info msg="StopPodSandbox for \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\"" May 10 00:07:25.906180 containerd[1455]: time="2025-05-10T00:07:25.906140647Z" level=info msg="Ensure that sandbox a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8 in task-service has been cleanup successfully" May 10 00:07:25.909101 containerd[1455]: time="2025-05-10T00:07:25.908196646Z" level=info msg="TearDown network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\" successfully" May 10 00:07:25.909101 containerd[1455]: time="2025-05-10T00:07:25.908225526Z" level=info msg="StopPodSandbox for \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\" returns successfully" May 10 00:07:25.908501 systemd[1]: run-netns-cni\x2d7f654b41\x2dc7a0\x2dc9ca\x2dabb9\x2defb0d13093c3.mount: Deactivated successfully. May 10 00:07:25.910937 containerd[1455]: time="2025-05-10T00:07:25.910908018Z" level=info msg="StopPodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\"" May 10 00:07:25.911735 containerd[1455]: time="2025-05-10T00:07:25.911649632Z" level=info msg="TearDown network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" successfully" May 10 00:07:25.911940 containerd[1455]: time="2025-05-10T00:07:25.911867356Z" level=info msg="StopPodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" returns successfully" May 10 00:07:25.912546 containerd[1455]: time="2025-05-10T00:07:25.912451287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hd28,Uid:56542ad7-7f48-4051-ae36-d7536ab16d6e,Namespace:calico-system,Attempt:2,}" May 10 00:07:25.913161 kubelet[2619]: I0510 00:07:25.912978 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867" May 10 00:07:25.915171 containerd[1455]: time="2025-05-10T00:07:25.914977295Z" level=info msg="StopPodSandbox for \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\"" May 10 00:07:25.916463 containerd[1455]: time="2025-05-10T00:07:25.915276581Z" level=info msg="Ensure that sandbox c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867 in task-service has been cleanup successfully" May 10 00:07:25.916463 containerd[1455]: time="2025-05-10T00:07:25.916442123Z" level=info msg="TearDown network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\" successfully" May 10 00:07:25.916463 containerd[1455]: time="2025-05-10T00:07:25.916461924Z" level=info msg="StopPodSandbox for \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\" returns successfully" May 10 
00:07:25.917024 containerd[1455]: time="2025-05-10T00:07:25.916929613Z" level=info msg="StopPodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\"" May 10 00:07:25.917024 containerd[1455]: time="2025-05-10T00:07:25.917001134Z" level=info msg="TearDown network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" successfully" May 10 00:07:25.917024 containerd[1455]: time="2025-05-10T00:07:25.917011334Z" level=info msg="StopPodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" returns successfully" May 10 00:07:25.918434 containerd[1455]: time="2025-05-10T00:07:25.917743948Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\"" May 10 00:07:25.918434 containerd[1455]: time="2025-05-10T00:07:25.917831190Z" level=info msg="TearDown network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" successfully" May 10 00:07:25.918905 kubelet[2619]: I0510 00:07:25.918887 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf" May 10 00:07:25.998067 containerd[1455]: time="2025-05-10T00:07:25.917857270Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" returns successfully" May 10 00:07:25.998203 containerd[1455]: time="2025-05-10T00:07:25.919575863Z" level=info msg="StopPodSandbox for \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\"" May 10 00:07:25.998486 containerd[1455]: time="2025-05-10T00:07:25.998295726Z" level=info msg="Ensure that sandbox 9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf in task-service has been cleanup successfully" May 10 00:07:25.998486 containerd[1455]: time="2025-05-10T00:07:25.998476569Z" level=info msg="TearDown network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\" successfully" May 10 00:07:25.998556 containerd[1455]: time="2025-05-10T00:07:25.998490849Z" level=info msg="StopPodSandbox for \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\" returns successfully" May 10 00:07:25.999005 containerd[1455]: time="2025-05-10T00:07:25.998971939Z" level=info msg="StopPodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\"" May 10 00:07:25.999101 containerd[1455]: time="2025-05-10T00:07:25.999080981Z" level=info msg="TearDown network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" successfully" May 10 00:07:25.999101 containerd[1455]: time="2025-05-10T00:07:25.999096301Z" level=info msg="StopPodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" returns successfully" May 10 00:07:25.999423 containerd[1455]: time="2025-05-10T00:07:25.999393987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:3,}" May 10 00:07:26.000202 containerd[1455]: time="2025-05-10T00:07:25.999754273Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\"" May 10 00:07:26.000480 containerd[1455]: time="2025-05-10T00:07:26.000374805Z" level=info msg="TearDown network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" successfully" May 10 00:07:26.000480 containerd[1455]: time="2025-05-10T00:07:26.000393006Z" level=info 
msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" returns successfully" May 10 00:07:26.001158 containerd[1455]: time="2025-05-10T00:07:26.001137060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:3,}" May 10 00:07:26.143502 containerd[1455]: time="2025-05-10T00:07:26.143352406Z" level=error msg="Failed to destroy network for sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.145480 containerd[1455]: time="2025-05-10T00:07:26.145413884Z" level=error msg="encountered an error cleaning up failed sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.145719 containerd[1455]: time="2025-05-10T00:07:26.145590127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.146584 kubelet[2619]: E0510 00:07:26.146213 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.146584 kubelet[2619]: E0510 00:07:26.146277 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:26.146584 kubelet[2619]: E0510 00:07:26.146298 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-srzmr" May 10 00:07:26.146748 kubelet[2619]: E0510 00:07:26.146341 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-srzmr_kube-system(5487806e-6495-4d4f-a191-df1e4f5aa0a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-srzmr_kube-system(5487806e-6495-4d4f-a191-df1e4f5aa0a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-srzmr" podUID="5487806e-6495-4d4f-a191-df1e4f5aa0a8" May 10 00:07:26.172004 containerd[1455]: time="2025-05-10T00:07:26.171951614Z" level=error msg="Failed to destroy network for sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.172884 containerd[1455]: time="2025-05-10T00:07:26.171951734Z" level=error msg="Failed to destroy network for sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.174168 containerd[1455]: time="2025-05-10T00:07:26.174122814Z" level=error msg="encountered an error cleaning up failed sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.174243 containerd[1455]: time="2025-05-10T00:07:26.174195696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.174443 kubelet[2619]: E0510 00:07:26.174411 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.174578 kubelet[2619]: E0510 00:07:26.174560 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:26.174665 kubelet[2619]: E0510 00:07:26.174650 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kflv6" May 10 00:07:26.174771 kubelet[2619]: E0510 00:07:26.174747 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kflv6_kube-system(c06d69b8-4f38-4476-a0a9-074ed47a6924)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kflv6_kube-system(c06d69b8-4f38-4476-a0a9-074ed47a6924)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kflv6" podUID="c06d69b8-4f38-4476-a0a9-074ed47a6924" May 10 00:07:26.176081 containerd[1455]: time="2025-05-10T00:07:26.176037050Z" level=error msg="Failed to destroy network for sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.176390 containerd[1455]: time="2025-05-10T00:07:26.176356655Z" level=error msg="encountered an error cleaning up failed sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.176448 containerd[1455]: time="2025-05-10T00:07:26.176428217Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.176742 kubelet[2619]: E0510 00:07:26.176570 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.176742 kubelet[2619]: E0510 00:07:26.176632 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:26.176742 kubelet[2619]: E0510 00:07:26.176651 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" May 10 00:07:26.176899 kubelet[2619]: E0510 00:07:26.176695 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c5c466cb8-82f6f_calico-apiserver(01ce480b-3a6d-4fd5-af7a-73b802892ab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5c466cb8-82f6f_calico-apiserver(01ce480b-3a6d-4fd5-af7a-73b802892ab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" podUID="01ce480b-3a6d-4fd5-af7a-73b802892ab1" May 10 00:07:26.178128 containerd[1455]: time="2025-05-10T00:07:26.178093847Z" level=error msg="encountered an error cleaning up failed sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.178430 containerd[1455]: time="2025-05-10T00:07:26.178349452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.178705 kubelet[2619]: E0510 00:07:26.178590 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.178705 kubelet[2619]: E0510 00:07:26.178622 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:26.178705 kubelet[2619]: E0510 00:07:26.178637 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" May 10 00:07:26.178799 kubelet[2619]: E0510 00:07:26.178674 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c5c466cb8-7rrbw_calico-apiserver(4adb856a-b358-4a75-afdb-0a2493e0d860)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5c466cb8-7rrbw_calico-apiserver(4adb856a-b358-4a75-afdb-0a2493e0d860)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" podUID="4adb856a-b358-4a75-afdb-0a2493e0d860" May 10 00:07:26.179388 containerd[1455]: time="2025-05-10T00:07:26.179067585Z" level=error msg="Failed to destroy network for sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.179727 containerd[1455]: time="2025-05-10T00:07:26.179698517Z" level=error msg="encountered an error cleaning up failed sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.179925 containerd[1455]: time="2025-05-10T00:07:26.179902761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.180373 kubelet[2619]: E0510 00:07:26.180254 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.180373 kubelet[2619]: E0510 00:07:26.180302 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:26.180373 kubelet[2619]: E0510 00:07:26.180318 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" May 10 00:07:26.180481 kubelet[2619]: E0510 00:07:26.180342 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c95969b9-5mpjw_calico-system(2aab4214-d322-46c1-9e38-01e24fa563db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c95969b9-5mpjw_calico-system(2aab4214-d322-46c1-9e38-01e24fa563db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" podUID="2aab4214-d322-46c1-9e38-01e24fa563db" May 10 00:07:26.186876 containerd[1455]: time="2025-05-10T00:07:26.186699086Z" level=error msg="Failed to destroy network for sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.187552 containerd[1455]: time="2025-05-10T00:07:26.187369899Z" level=error msg="encountered an error cleaning up failed sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.187552 containerd[1455]: time="2025-05-10T00:07:26.187424900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hd28,Uid:56542ad7-7f48-4051-ae36-d7536ab16d6e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.187654 kubelet[2619]: E0510 00:07:26.187565 2619 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:07:26.187654 kubelet[2619]: E0510 00:07:26.187599 2619 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hd28" May 10 00:07:26.187654 kubelet[2619]: E0510 00:07:26.187614 2619 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hd28" May 10 00:07:26.187728 kubelet[2619]: E0510 00:07:26.187646 2619 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6hd28_calico-system(56542ad7-7f48-4051-ae36-d7536ab16d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6hd28_calico-system(56542ad7-7f48-4051-ae36-d7536ab16d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hd28" podUID="56542ad7-7f48-4051-ae36-d7536ab16d6e" May 10 00:07:26.307771 containerd[1455]: time="2025-05-10T00:07:26.307724041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:26.308574 containerd[1455]: time="2025-05-10T00:07:26.308381693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 10 00:07:26.309509 containerd[1455]: time="2025-05-10T00:07:26.309437473Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:26.314214 containerd[1455]: time="2025-05-10T00:07:26.313421667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:26.314338 containerd[1455]: time="2025-05-10T00:07:26.314257002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.468751132s" May 10 00:07:26.314338 containerd[1455]: time="2025-05-10T00:07:26.314317123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 10 00:07:26.322336 containerd[1455]: time="2025-05-10T00:07:26.322295430Z" level=info msg="CreateContainer within sandbox \"55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 00:07:26.341660 containerd[1455]: time="2025-05-10T00:07:26.341600587Z" level=info msg="CreateContainer within sandbox \"55a441e92168f398028650681957856976323d88c1c33dfb8c998aab18ec28d0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"533fdadbb11ee8549be5349639c04c6a8a9e32d32d7b9e819853788bb4da24c3\"" May 10 00:07:26.342486 containerd[1455]: time="2025-05-10T00:07:26.342426402Z" level=info msg="StartContainer for \"533fdadbb11ee8549be5349639c04c6a8a9e32d32d7b9e819853788bb4da24c3\"" May 10 00:07:26.415093 
systemd[1]: Started cri-containerd-533fdadbb11ee8549be5349639c04c6a8a9e32d32d7b9e819853788bb4da24c3.scope - libcontainer container 533fdadbb11ee8549be5349639c04c6a8a9e32d32d7b9e819853788bb4da24c3. May 10 00:07:26.443857 containerd[1455]: time="2025-05-10T00:07:26.443785514Z" level=info msg="StartContainer for \"533fdadbb11ee8549be5349639c04c6a8a9e32d32d7b9e819853788bb4da24c3\" returns successfully" May 10 00:07:26.672263 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 00:07:26.672403 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 10 00:07:26.697610 systemd[1]: run-netns-cni\x2d003f2136\x2d821f\x2d158e\x2db4e9\x2def75fb1d17e9.mount: Deactivated successfully. May 10 00:07:26.697695 systemd[1]: run-netns-cni\x2d46d8a687\x2da510\x2d8830\x2d6b4a\x2da00c7a04f67b.mount: Deactivated successfully. May 10 00:07:26.697746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340031539.mount: Deactivated successfully. May 10 00:07:26.927679 kubelet[2619]: E0510 00:07:26.927646 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:26.932628 kubelet[2619]: I0510 00:07:26.931612 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085" May 10 00:07:26.932729 containerd[1455]: time="2025-05-10T00:07:26.932478699Z" level=info msg="StopPodSandbox for \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\"" May 10 00:07:26.932729 containerd[1455]: time="2025-05-10T00:07:26.932656662Z" level=info msg="Ensure that sandbox debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085 in task-service has been cleanup successfully" May 10 00:07:26.934881 systemd[1]: run-netns-cni\x2d8a543cde\x2d3f07\x2d3e6c\x2df633\x2d7f4a8e284f39.mount: Deactivated successfully. 
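The RunPodSandbox failures above all share one root cause: the Calico CNI plugin stats /var/lib/calico/nodename on every add and delete, and that file is only written by the calico-node container after it has started and mounted /var/lib/calico/. The "StartContainer ... returns successfully" entry for 533fdadbb11ee8549be5349639c04c6a8a9e32d32d7b9e819853788bb4da24c3 (the calico-node container created in sandbox 55a441e9… above) marks the point where that precondition is met, so later sandbox attempts can succeed. A minimal way to confirm this state from a shell on the node (assuming crictl and kubectl are configured there; the calico-system namespace is taken from this log and may differ per install) would be something like:

  ls -l /var/lib/calico/nodename              # the file the CNI plugin checks before add/delete
  crictl ps --name calico-node                # is the calico-node container running under containerd?
  kubectl get pods -n calico-system -o wide   # the calico-node pod on this node should be Running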
May 10 00:07:26.937310 containerd[1455]: time="2025-05-10T00:07:26.936038484Z" level=info msg="TearDown network for sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\" successfully" May 10 00:07:26.937310 containerd[1455]: time="2025-05-10T00:07:26.936078765Z" level=info msg="StopPodSandbox for \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\" returns successfully" May 10 00:07:26.937310 containerd[1455]: time="2025-05-10T00:07:26.936767818Z" level=info msg="StopPodSandbox for \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\"" May 10 00:07:26.937310 containerd[1455]: time="2025-05-10T00:07:26.936954181Z" level=info msg="TearDown network for sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\" successfully" May 10 00:07:26.937310 containerd[1455]: time="2025-05-10T00:07:26.936969501Z" level=info msg="StopPodSandbox for \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\" returns successfully" May 10 00:07:26.938099 containerd[1455]: time="2025-05-10T00:07:26.938039761Z" level=info msg="StopPodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\"" May 10 00:07:26.938838 containerd[1455]: time="2025-05-10T00:07:26.938667933Z" level=info msg="TearDown network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" successfully" May 10 00:07:26.938838 containerd[1455]: time="2025-05-10T00:07:26.938796335Z" level=info msg="StopPodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" returns successfully" May 10 00:07:26.939721 kubelet[2619]: I0510 00:07:26.939024 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d" May 10 00:07:26.939871 containerd[1455]: time="2025-05-10T00:07:26.939613510Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\"" May 10 00:07:26.939871 containerd[1455]: time="2025-05-10T00:07:26.939688712Z" level=info msg="TearDown network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" successfully" May 10 00:07:26.939871 containerd[1455]: time="2025-05-10T00:07:26.939698272Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" returns successfully" May 10 00:07:26.939871 containerd[1455]: time="2025-05-10T00:07:26.939745833Z" level=info msg="StopPodSandbox for \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\"" May 10 00:07:26.940131 containerd[1455]: time="2025-05-10T00:07:26.940010718Z" level=info msg="Ensure that sandbox 5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d in task-service has been cleanup successfully" May 10 00:07:26.941211 containerd[1455]: time="2025-05-10T00:07:26.941179019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:4,}" May 10 00:07:26.942565 systemd[1]: run-netns-cni\x2d7b259e2a\x2df628\x2dc7f8\x2dd27f\x2d4579ed80867d.mount: Deactivated successfully. 
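Each failed attempt leaves a sandbox record behind, so before retrying kubelet first reports the container as gone (pod_container_deletor) and drives a full teardown of that sandbox plus every earlier attempt's sandbox (the chains of StopPodSandbox, "Ensure that sandbox ... has been cleanup successfully", and TearDown lines), then issues RunPodSandbox again with the Attempt counter incremented; calico-kube-controllers-5c95969b9-5mpjw, for example, moves from Attempt:3 to Attempt:4 here. To look at this retry state directly on the node, assuming crictl is pointed at this containerd instance (the sandbox ID below is a placeholder):

  crictl pods --name calico-kube-controllers-5c95969b9-5mpjw   # sandboxes for the pod, including NotReady ones
  crictl inspectp <POD_SANDBOX_ID>                             # full state and metadata of one sandbox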
May 10 00:07:26.942888 containerd[1455]: time="2025-05-10T00:07:26.942616326Z" level=info msg="TearDown network for sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\" successfully" May 10 00:07:26.942888 containerd[1455]: time="2025-05-10T00:07:26.942634446Z" level=info msg="StopPodSandbox for \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\" returns successfully" May 10 00:07:26.943631 containerd[1455]: time="2025-05-10T00:07:26.943020333Z" level=info msg="StopPodSandbox for \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\"" May 10 00:07:26.943631 containerd[1455]: time="2025-05-10T00:07:26.943103535Z" level=info msg="TearDown network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\" successfully" May 10 00:07:26.943631 containerd[1455]: time="2025-05-10T00:07:26.943113615Z" level=info msg="StopPodSandbox for \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\" returns successfully" May 10 00:07:26.945372 containerd[1455]: time="2025-05-10T00:07:26.945329416Z" level=info msg="StopPodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\"" May 10 00:07:26.945437 containerd[1455]: time="2025-05-10T00:07:26.945428178Z" level=info msg="TearDown network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" successfully" May 10 00:07:26.945474 containerd[1455]: time="2025-05-10T00:07:26.945439858Z" level=info msg="StopPodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" returns successfully" May 10 00:07:26.945902 kubelet[2619]: I0510 00:07:26.945870 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6" May 10 00:07:26.947603 containerd[1455]: time="2025-05-10T00:07:26.947557937Z" level=info msg="StopPodSandbox for \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\"" May 10 00:07:26.947824 containerd[1455]: time="2025-05-10T00:07:26.947594898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hd28,Uid:56542ad7-7f48-4051-ae36-d7536ab16d6e,Namespace:calico-system,Attempt:3,}" May 10 00:07:26.947824 containerd[1455]: time="2025-05-10T00:07:26.947756581Z" level=info msg="Ensure that sandbox 7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6 in task-service has been cleanup successfully" May 10 00:07:26.951370 containerd[1455]: time="2025-05-10T00:07:26.949973662Z" level=info msg="TearDown network for sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\" successfully" May 10 00:07:26.951370 containerd[1455]: time="2025-05-10T00:07:26.950019142Z" level=info msg="StopPodSandbox for \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\" returns successfully" May 10 00:07:26.951370 containerd[1455]: time="2025-05-10T00:07:26.950443590Z" level=info msg="StopPodSandbox for \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\"" May 10 00:07:26.951370 containerd[1455]: time="2025-05-10T00:07:26.950523272Z" level=info msg="TearDown network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\" successfully" May 10 00:07:26.951370 containerd[1455]: time="2025-05-10T00:07:26.950532952Z" level=info msg="StopPodSandbox for \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\" returns successfully" May 10 00:07:26.951011 systemd[1]: 
run-netns-cni\x2dcf45b8d6\x2dbb3e\x2d4c05\x2dd5a5\x2d73025e5ea27a.mount: Deactivated successfully. May 10 00:07:26.951628 containerd[1455]: time="2025-05-10T00:07:26.951584971Z" level=info msg="StopPodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\"" May 10 00:07:26.953741 kubelet[2619]: I0510 00:07:26.953002 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545" May 10 00:07:26.953840 containerd[1455]: time="2025-05-10T00:07:26.953003598Z" level=info msg="TearDown network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" successfully" May 10 00:07:26.953840 containerd[1455]: time="2025-05-10T00:07:26.953028958Z" level=info msg="StopPodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" returns successfully" May 10 00:07:26.953840 containerd[1455]: time="2025-05-10T00:07:26.953467726Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\"" May 10 00:07:26.953840 containerd[1455]: time="2025-05-10T00:07:26.953563408Z" level=info msg="TearDown network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" successfully" May 10 00:07:26.953840 containerd[1455]: time="2025-05-10T00:07:26.953573688Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" returns successfully" May 10 00:07:26.953840 containerd[1455]: time="2025-05-10T00:07:26.953748251Z" level=info msg="StopPodSandbox for \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\"" May 10 00:07:26.954407 containerd[1455]: time="2025-05-10T00:07:26.954253941Z" level=info msg="Ensure that sandbox c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545 in task-service has been cleanup successfully" May 10 00:07:26.954554 containerd[1455]: time="2025-05-10T00:07:26.954319302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:4,}" May 10 00:07:26.956688 containerd[1455]: time="2025-05-10T00:07:26.954745190Z" level=info msg="TearDown network for sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\" successfully" May 10 00:07:26.956688 containerd[1455]: time="2025-05-10T00:07:26.954770470Z" level=info msg="StopPodSandbox for \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\" returns successfully" May 10 00:07:26.956688 containerd[1455]: time="2025-05-10T00:07:26.956336499Z" level=info msg="StopPodSandbox for \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\"" May 10 00:07:26.956688 containerd[1455]: time="2025-05-10T00:07:26.956435901Z" level=info msg="TearDown network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\" successfully" May 10 00:07:26.956688 containerd[1455]: time="2025-05-10T00:07:26.956447141Z" level=info msg="StopPodSandbox for \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\" returns successfully" May 10 00:07:26.957179 systemd[1]: run-netns-cni\x2dbd954120\x2d2ebc\x2d6210\x2d0caa\x2d77b4d63d0e58.mount: Deactivated successfully. 
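The systemd entries of the form "run-netns-cni\x2d….mount: Deactivated successfully" are the flip side of these sandbox teardowns: containerd bind-mounts each sandbox's network namespace under /run/netns/cni-<uuid>, and when the sandbox is destroyed the mount goes away, which systemd records as the mount unit deactivating (the \x2d sequences are just hyphens escaped in systemd unit names). To see which CNI namespaces still exist at a given moment, assuming iproute2 is available on the node:

  ip netns list    # named network namespaces, including the cni-<uuid> ones
  ls /run/netns    # the underlying bind-mount targets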
May 10 00:07:26.957645 containerd[1455]: time="2025-05-10T00:07:26.957448760Z" level=info msg="StopPodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\"" May 10 00:07:26.957645 containerd[1455]: time="2025-05-10T00:07:26.957530401Z" level=info msg="TearDown network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" successfully" May 10 00:07:26.957645 containerd[1455]: time="2025-05-10T00:07:26.957540601Z" level=info msg="StopPodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" returns successfully" May 10 00:07:26.959161 containerd[1455]: time="2025-05-10T00:07:26.959088750Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\"" May 10 00:07:26.959335 containerd[1455]: time="2025-05-10T00:07:26.959188592Z" level=info msg="TearDown network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" successfully" May 10 00:07:26.959335 containerd[1455]: time="2025-05-10T00:07:26.959200632Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" returns successfully" May 10 00:07:26.959383 kubelet[2619]: I0510 00:07:26.959144 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a" May 10 00:07:26.960315 containerd[1455]: time="2025-05-10T00:07:26.959751602Z" level=info msg="StopPodSandbox for \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\"" May 10 00:07:26.960315 containerd[1455]: time="2025-05-10T00:07:26.959822963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:4,}" May 10 00:07:26.960315 containerd[1455]: time="2025-05-10T00:07:26.960085208Z" level=info msg="Ensure that sandbox 5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a in task-service has been cleanup successfully" May 10 00:07:26.961092 containerd[1455]: time="2025-05-10T00:07:26.960919984Z" level=info msg="TearDown network for sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\" successfully" May 10 00:07:26.961092 containerd[1455]: time="2025-05-10T00:07:26.961071867Z" level=info msg="StopPodSandbox for \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\" returns successfully" May 10 00:07:26.961929 containerd[1455]: time="2025-05-10T00:07:26.961742439Z" level=info msg="StopPodSandbox for \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\"" May 10 00:07:26.961929 containerd[1455]: time="2025-05-10T00:07:26.961835161Z" level=info msg="TearDown network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\" successfully" May 10 00:07:26.961929 containerd[1455]: time="2025-05-10T00:07:26.961856521Z" level=info msg="StopPodSandbox for \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\" returns successfully" May 10 00:07:26.962756 containerd[1455]: time="2025-05-10T00:07:26.962366050Z" level=info msg="StopPodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\"" May 10 00:07:26.962756 containerd[1455]: time="2025-05-10T00:07:26.962458252Z" level=info msg="TearDown network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" successfully" May 10 00:07:26.962756 containerd[1455]: time="2025-05-10T00:07:26.962469772Z" 
level=info msg="StopPodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" returns successfully" May 10 00:07:26.962915 containerd[1455]: time="2025-05-10T00:07:26.962766658Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\"" May 10 00:07:26.962915 containerd[1455]: time="2025-05-10T00:07:26.962860980Z" level=info msg="TearDown network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" successfully" May 10 00:07:26.962915 containerd[1455]: time="2025-05-10T00:07:26.962873140Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" returns successfully" May 10 00:07:26.963773 kubelet[2619]: E0510 00:07:26.963016 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:26.963773 kubelet[2619]: I0510 00:07:26.963199 2619 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28" May 10 00:07:26.964440 containerd[1455]: time="2025-05-10T00:07:26.963563913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:4,}" May 10 00:07:26.964440 containerd[1455]: time="2025-05-10T00:07:26.963610353Z" level=info msg="StopPodSandbox for \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\"" May 10 00:07:26.964440 containerd[1455]: time="2025-05-10T00:07:26.963776636Z" level=info msg="Ensure that sandbox 6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28 in task-service has been cleanup successfully" May 10 00:07:26.964440 containerd[1455]: time="2025-05-10T00:07:26.964101522Z" level=info msg="TearDown network for sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\" successfully" May 10 00:07:26.964440 containerd[1455]: time="2025-05-10T00:07:26.964121363Z" level=info msg="StopPodSandbox for \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\" returns successfully" May 10 00:07:26.965739 containerd[1455]: time="2025-05-10T00:07:26.964826056Z" level=info msg="StopPodSandbox for \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\"" May 10 00:07:26.965739 containerd[1455]: time="2025-05-10T00:07:26.965129421Z" level=info msg="TearDown network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\" successfully" May 10 00:07:26.965739 containerd[1455]: time="2025-05-10T00:07:26.965143822Z" level=info msg="StopPodSandbox for \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\" returns successfully" May 10 00:07:26.965739 containerd[1455]: time="2025-05-10T00:07:26.965492268Z" level=info msg="StopPodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\"" May 10 00:07:26.965739 containerd[1455]: time="2025-05-10T00:07:26.965561389Z" level=info msg="TearDown network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" successfully" May 10 00:07:26.965739 containerd[1455]: time="2025-05-10T00:07:26.965570230Z" level=info msg="StopPodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" returns successfully" May 10 00:07:26.967227 containerd[1455]: time="2025-05-10T00:07:26.966078919Z" level=info msg="StopPodSandbox for 
\"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\"" May 10 00:07:26.967227 containerd[1455]: time="2025-05-10T00:07:26.966156960Z" level=info msg="TearDown network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" successfully" May 10 00:07:26.967227 containerd[1455]: time="2025-05-10T00:07:26.966166961Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" returns successfully" May 10 00:07:26.967227 containerd[1455]: time="2025-05-10T00:07:26.966608129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:4,}" May 10 00:07:26.967326 kubelet[2619]: E0510 00:07:26.966348 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:27.452930 systemd-networkd[1386]: caliab3afd87926: Link UP May 10 00:07:27.454198 systemd-networkd[1386]: caliab3afd87926: Gained carrier May 10 00:07:27.464695 kubelet[2619]: I0510 00:07:27.463323 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cczsl" podStartSLOduration=2.142956514 podStartE2EDuration="12.463304672s" podCreationTimestamp="2025-05-10 00:07:15 +0000 UTC" firstStartedPulling="2025-05-10 00:07:15.995133467 +0000 UTC m=+22.323066292" lastFinishedPulling="2025-05-10 00:07:26.315481625 +0000 UTC m=+32.643414450" observedRunningTime="2025-05-10 00:07:26.947427095 +0000 UTC m=+33.275359960" watchObservedRunningTime="2025-05-10 00:07:27.463304672 +0000 UTC m=+33.791237497" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.057 [INFO][4345] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.136 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0 coredns-7db6d8ff4d- kube-system 5487806e-6495-4d4f-a191-df1e4f5aa0a8 726 0 2025-05-10 00:07:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-srzmr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab3afd87926 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srzmr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srzmr-" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.136 [INFO][4345] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srzmr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.383 [INFO][4424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" HandleID="k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Workload="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.398 [INFO][4424] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" HandleID="k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Workload="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f4940), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-srzmr", "timestamp":"2025-05-10 00:07:27.383123798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.398 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.398 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.398 [INFO][4424] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.401 [INFO][4424] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.414 [INFO][4424] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.418 [INFO][4424] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.420 [INFO][4424] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.422 [INFO][4424] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.422 [INFO][4424] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.424 [INFO][4424] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92 May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.428 [INFO][4424] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.437 [INFO][4424] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.437 [INFO][4424] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" host="localhost" May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.437 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
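These ipam/ lines show Calico's block-based IPAM doing the allocation for coredns-7db6d8ff4d-srzmr: the node ("localhost") already has an affinity for the /26 block 192.168.88.128/26, the plugin takes the host-wide IPAM lock, claims 192.168.88.129 from that block under a new handle, writes the block back, and releases the lock. If calicoctl is installed and pointed at the same datastore (an assumption; that is not shown in this log), the result could be inspected with something like:

  calicoctl ipam show --show-blocks          # blocks and their node affinities, e.g. 192.168.88.128/26
  calicoctl ipam show --ip=192.168.88.129    # which handle/workload owns this specific address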
May 10 00:07:27.469961 containerd[1455]: 2025-05-10 00:07:27.437 [INFO][4424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" HandleID="k8s-pod-network.e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Workload="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" May 10 00:07:27.471011 containerd[1455]: 2025-05-10 00:07:27.440 [INFO][4345] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srzmr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5487806e-6495-4d4f-a191-df1e4f5aa0a8", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-srzmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab3afd87926", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.471011 containerd[1455]: 2025-05-10 00:07:27.440 [INFO][4345] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srzmr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" May 10 00:07:27.471011 containerd[1455]: 2025-05-10 00:07:27.440 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab3afd87926 ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srzmr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" May 10 00:07:27.471011 containerd[1455]: 2025-05-10 00:07:27.454 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srzmr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" May 10 00:07:27.471011 containerd[1455]: 2025-05-10 00:07:27.455 
[INFO][4345] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srzmr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5487806e-6495-4d4f-a191-df1e4f5aa0a8", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92", Pod:"coredns-7db6d8ff4d-srzmr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab3afd87926", MAC:"da:c2:03:a1:6b:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.471011 containerd[1455]: 2025-05-10 00:07:27.467 [INFO][4345] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92" Namespace="kube-system" Pod="coredns-7db6d8ff4d-srzmr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--srzmr-eth0" May 10 00:07:27.488783 systemd-networkd[1386]: cali10222e0e31c: Link UP May 10 00:07:27.489188 systemd-networkd[1386]: cali10222e0e31c: Gained carrier May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.000 [INFO][4316] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.139 [INFO][4316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0 calico-kube-controllers-5c95969b9- calico-system 2aab4214-d322-46c1-9e38-01e24fa563db 724 0 2025-05-10 00:07:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c95969b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c95969b9-5mpjw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali10222e0e31c [] []}} 
ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Namespace="calico-system" Pod="calico-kube-controllers-5c95969b9-5mpjw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.139 [INFO][4316] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Namespace="calico-system" Pod="calico-kube-controllers-5c95969b9-5mpjw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.382 [INFO][4440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" HandleID="k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Workload="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.399 [INFO][4440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" HandleID="k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Workload="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003429a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c95969b9-5mpjw", "timestamp":"2025-05-10 00:07:27.382797632 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.399 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.437 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.437 [INFO][4440] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.441 [INFO][4440] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.447 [INFO][4440] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.460 [INFO][4440] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.466 [INFO][4440] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.469 [INFO][4440] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.469 [INFO][4440] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.472 [INFO][4440] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1 May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.476 [INFO][4440] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.482 [INFO][4440] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.482 [INFO][4440] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" host="localhost" May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.482 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:07:27.505918 containerd[1455]: 2025-05-10 00:07:27.482 [INFO][4440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" HandleID="k8s-pod-network.cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Workload="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" May 10 00:07:27.506574 containerd[1455]: 2025-05-10 00:07:27.485 [INFO][4316] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Namespace="calico-system" Pod="calico-kube-controllers-5c95969b9-5mpjw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0", GenerateName:"calico-kube-controllers-5c95969b9-", Namespace:"calico-system", SelfLink:"", UID:"2aab4214-d322-46c1-9e38-01e24fa563db", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c95969b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c95969b9-5mpjw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali10222e0e31c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.506574 containerd[1455]: 2025-05-10 00:07:27.485 [INFO][4316] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Namespace="calico-system" Pod="calico-kube-controllers-5c95969b9-5mpjw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" May 10 00:07:27.506574 containerd[1455]: 2025-05-10 00:07:27.485 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10222e0e31c ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Namespace="calico-system" Pod="calico-kube-controllers-5c95969b9-5mpjw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" May 10 00:07:27.506574 containerd[1455]: 2025-05-10 00:07:27.489 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Namespace="calico-system" Pod="calico-kube-controllers-5c95969b9-5mpjw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" May 10 00:07:27.506574 containerd[1455]: 2025-05-10 00:07:27.489 [INFO][4316] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Namespace="calico-system" Pod="calico-kube-controllers-5c95969b9-5mpjw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0", GenerateName:"calico-kube-controllers-5c95969b9-", Namespace:"calico-system", SelfLink:"", UID:"2aab4214-d322-46c1-9e38-01e24fa563db", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c95969b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1", Pod:"calico-kube-controllers-5c95969b9-5mpjw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali10222e0e31c", MAC:"12:b6:f7:89:e6:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.506574 containerd[1455]: 2025-05-10 00:07:27.501 [INFO][4316] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1" Namespace="calico-system" Pod="calico-kube-controllers-5c95969b9-5mpjw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c95969b9--5mpjw-eth0" May 10 00:07:27.510984 containerd[1455]: time="2025-05-10T00:07:27.510896164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:27.510984 containerd[1455]: time="2025-05-10T00:07:27.510968525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:27.511099 containerd[1455]: time="2025-05-10T00:07:27.510992885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.519876 containerd[1455]: time="2025-05-10T00:07:27.519694321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.535538 containerd[1455]: time="2025-05-10T00:07:27.535384362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:27.535538 containerd[1455]: time="2025-05-10T00:07:27.535486403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:27.536454 containerd[1455]: time="2025-05-10T00:07:27.535985212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.536454 containerd[1455]: time="2025-05-10T00:07:27.536135255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.537459 systemd-networkd[1386]: cali400611439f1: Link UP May 10 00:07:27.538412 systemd-networkd[1386]: cali400611439f1: Gained carrier May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.039 [INFO][4332] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.134 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6hd28-eth0 csi-node-driver- calico-system 56542ad7-7f48-4051-ae36-d7536ab16d6e 615 0 2025-05-10 00:07:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6hd28 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali400611439f1 [] []}} ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Namespace="calico-system" Pod="csi-node-driver-6hd28" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hd28-" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.135 [INFO][4332] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Namespace="calico-system" Pod="csi-node-driver-6hd28" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hd28-eth0" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.382 [INFO][4420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" HandleID="k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Workload="localhost-k8s-csi--node--driver--6hd28-eth0" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.400 [INFO][4420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" HandleID="k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Workload="localhost-k8s-csi--node--driver--6hd28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029cd30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6hd28", "timestamp":"2025-05-10 00:07:27.382823553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.400 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.482 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
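Several CNI ADD handlers run concurrently here ([4440], [4420], [4436] and more below), yet the timestamps show each one acquiring the host-wide IPAM lock only after the previous handler released it (27.437, 27.482, 27.528, ...), so assignments from the shared 192.168.88.128/26 block are strictly serialized. A minimal sketch of that pattern with a plain mutex; Calico's real lock also covers the datastore writes, which this toy ignores:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		ipamLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log
		next     = 129      // next host octet in 192.168.88.128/26
		wg       sync.WaitGroup
	)

	pods := []string{"coredns-7db6d8ff4d-srzmr", "calico-kube-controllers-5c95969b9-5mpjw",
		"csi-node-driver-6hd28", "calico-apiserver-7c5c466cb8-82f6f", "coredns-7db6d8ff4d-kflv6"}

	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			ipamLock.Lock() // "About to acquire host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.88.%d/26", next)
			next++
			ipamLock.Unlock() // "Released host-wide IPAM lock."
			fmt.Println(pod, "->", ip)
		}(pod)
	}
	wg.Wait()
}
```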
May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.482 [INFO][4420] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.485 [INFO][4420] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.493 [INFO][4420] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.503 [INFO][4420] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.507 [INFO][4420] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.511 [INFO][4420] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.512 [INFO][4420] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.514 [INFO][4420] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4 May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.521 [INFO][4420] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.528 [INFO][4420] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.528 [INFO][4420] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" host="localhost" May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.528 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:07:27.562270 containerd[1455]: 2025-05-10 00:07:27.528 [INFO][4420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" HandleID="k8s-pod-network.2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Workload="localhost-k8s-csi--node--driver--6hd28-eth0" May 10 00:07:27.562068 systemd[1]: Started cri-containerd-e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92.scope - libcontainer container e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92. 
May 10 00:07:27.564111 containerd[1455]: 2025-05-10 00:07:27.534 [INFO][4332] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Namespace="calico-system" Pod="csi-node-driver-6hd28" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hd28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6hd28-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56542ad7-7f48-4051-ae36-d7536ab16d6e", ResourceVersion:"615", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6hd28", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali400611439f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.564111 containerd[1455]: 2025-05-10 00:07:27.534 [INFO][4332] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Namespace="calico-system" Pod="csi-node-driver-6hd28" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hd28-eth0" May 10 00:07:27.564111 containerd[1455]: 2025-05-10 00:07:27.534 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali400611439f1 ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Namespace="calico-system" Pod="csi-node-driver-6hd28" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hd28-eth0" May 10 00:07:27.564111 containerd[1455]: 2025-05-10 00:07:27.539 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Namespace="calico-system" Pod="csi-node-driver-6hd28" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hd28-eth0" May 10 00:07:27.564111 containerd[1455]: 2025-05-10 00:07:27.539 [INFO][4332] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Namespace="calico-system" Pod="csi-node-driver-6hd28" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hd28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6hd28-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56542ad7-7f48-4051-ae36-d7536ab16d6e", ResourceVersion:"615", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 15, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4", Pod:"csi-node-driver-6hd28", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali400611439f1", MAC:"fe:66:fc:d8:b2:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.564111 containerd[1455]: 2025-05-10 00:07:27.556 [INFO][4332] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4" Namespace="calico-system" Pod="csi-node-driver-6hd28" WorkloadEndpoint="localhost-k8s-csi--node--driver--6hd28-eth0" May 10 00:07:27.566877 systemd[1]: Started cri-containerd-cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1.scope - libcontainer container cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1. May 10 00:07:27.585032 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:07:27.587631 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:07:27.592648 containerd[1455]: time="2025-05-10T00:07:27.592524064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:27.601593 systemd-networkd[1386]: calicb916e63fed: Link UP May 10 00:07:27.601980 systemd-networkd[1386]: calicb916e63fed: Gained carrier May 10 00:07:27.623759 containerd[1455]: time="2025-05-10T00:07:27.622536680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-srzmr,Uid:5487806e-6495-4d4f-a191-df1e4f5aa0a8,Namespace:kube-system,Attempt:4,} returns sandbox id \"e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92\"" May 10 00:07:27.624702 kubelet[2619]: E0510 00:07:27.623945 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:27.628893 containerd[1455]: time="2025-05-10T00:07:27.627953417Z" level=info msg="CreateContainer within sandbox \"e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:07:27.636481 containerd[1455]: time="2025-05-10T00:07:27.592618105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:27.636481 containerd[1455]: time="2025-05-10T00:07:27.635933840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.636481 containerd[1455]: time="2025-05-10T00:07:27.636123323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.058 [INFO][4361] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.138 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0 calico-apiserver-7c5c466cb8- calico-apiserver 01ce480b-3a6d-4fd5-af7a-73b802892ab1 721 0 2025-05-10 00:07:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c5c466cb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c5c466cb8-82f6f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicb916e63fed [] []}} ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-82f6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.139 [INFO][4361] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-82f6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.388 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" HandleID="k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Workload="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.406 [INFO][4436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" HandleID="k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Workload="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000424040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c5c466cb8-82f6f", "timestamp":"2025-05-10 00:07:27.387990805 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.406 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.528 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
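The kubelet error a few lines up ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") means the node's resolv.conf lists more nameservers than the resolver supports, so only the first three are applied to pods. A small sketch of that check, assuming the conventional limit of three nameservers; it is not the kubelet's own implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed classic resolver limit the kubelet warns about

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Println("nameservers:", servers)
	}
}
```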
May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.529 [INFO][4436] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.532 [INFO][4436] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.543 [INFO][4436] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.558 [INFO][4436] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.562 [INFO][4436] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.570 [INFO][4436] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.570 [INFO][4436] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.578 [INFO][4436] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07 May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.583 [INFO][4436] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.594 [INFO][4436] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.594 [INFO][4436] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" host="localhost" May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.594 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
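Every address claimed in this trace comes out of the single IPAM block 192.168.88.128/26, which spans 64 addresses, 192.168.88.128 through 192.168.88.191. A short Go confirmation of that range, purely illustrative:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	size := 1 << (bits - ones) // 2^(32-26) = 64 addresses

	first := block.IP.To4()
	last := make(net.IP, 4)
	copy(last, first)
	last[3] += byte(size - 1)

	fmt.Printf("block %s: %d addresses, %s - %s\n", block, size, first, last)
	// Assigned in this log: .129 (coredns-srzmr), .130 (calico-kube-controllers),
	// .131 (csi-node-driver), .132 and .134 (calico-apiserver), .133 (coredns-kflv6).
}
```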
May 10 00:07:27.636987 containerd[1455]: 2025-05-10 00:07:27.594 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" HandleID="k8s-pod-network.36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Workload="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" May 10 00:07:27.637466 containerd[1455]: 2025-05-10 00:07:27.596 [INFO][4361] cni-plugin/k8s.go 386: Populated endpoint ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-82f6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0", GenerateName:"calico-apiserver-7c5c466cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"01ce480b-3a6d-4fd5-af7a-73b802892ab1", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5c466cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c5c466cb8-82f6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb916e63fed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.637466 containerd[1455]: 2025-05-10 00:07:27.596 [INFO][4361] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-82f6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" May 10 00:07:27.637466 containerd[1455]: 2025-05-10 00:07:27.596 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb916e63fed ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-82f6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" May 10 00:07:27.637466 containerd[1455]: 2025-05-10 00:07:27.601 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-82f6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" May 10 00:07:27.637466 containerd[1455]: 2025-05-10 00:07:27.602 [INFO][4361] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-82f6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0", GenerateName:"calico-apiserver-7c5c466cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"01ce480b-3a6d-4fd5-af7a-73b802892ab1", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5c466cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07", Pod:"calico-apiserver-7c5c466cb8-82f6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb916e63fed", MAC:"da:7a:a5:dd:63:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.637466 containerd[1455]: 2025-05-10 00:07:27.630 [INFO][4361] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-82f6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--82f6f-eth0" May 10 00:07:27.650156 containerd[1455]: time="2025-05-10T00:07:27.650114534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95969b9-5mpjw,Uid:2aab4214-d322-46c1-9e38-01e24fa563db,Namespace:calico-system,Attempt:4,} returns sandbox id \"cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1\"" May 10 00:07:27.654056 containerd[1455]: time="2025-05-10T00:07:27.653796399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 10 00:07:27.663208 containerd[1455]: time="2025-05-10T00:07:27.663038685Z" level=info msg="CreateContainer within sandbox \"e2c2d61dd0397f0e7710d72bf650796a7b014ab398856931cd357d7efe3c8d92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46d6131e5a19dc1bbe838363998170ca51d2dbc4bca6669ac6204f595955929c\"" May 10 00:07:27.663726 containerd[1455]: time="2025-05-10T00:07:27.663656176Z" level=info msg="StartContainer for \"46d6131e5a19dc1bbe838363998170ca51d2dbc4bca6669ac6204f595955929c\"" May 10 00:07:27.667332 systemd[1]: Started cri-containerd-2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4.scope - libcontainer container 2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4. May 10 00:07:27.674382 containerd[1455]: time="2025-05-10T00:07:27.674196804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:27.674382 containerd[1455]: time="2025-05-10T00:07:27.674266006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:27.674382 containerd[1455]: time="2025-05-10T00:07:27.674277526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.674568 containerd[1455]: time="2025-05-10T00:07:27.674454369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.676889 systemd-networkd[1386]: cali34b8ccad11a: Link UP May 10 00:07:27.677227 systemd-networkd[1386]: cali34b8ccad11a: Gained carrier May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.067 [INFO][4398] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.136 [INFO][4398] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0 coredns-7db6d8ff4d- kube-system c06d69b8-4f38-4476-a0a9-074ed47a6924 718 0 2025-05-10 00:07:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-kflv6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34b8ccad11a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kflv6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kflv6-" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.136 [INFO][4398] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kflv6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.382 [INFO][4422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" HandleID="k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Workload="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.410 [INFO][4422] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" HandleID="k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Workload="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004e6b80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-kflv6", "timestamp":"2025-05-10 00:07:27.382801393 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.410 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
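The endpoint dumps in this trace all share the same projectcalico.org/v3 WorkloadEndpoint shape: workload identity (namespace, pod, node, orchestrator), the assigned IPNetworks, the profiles derived from namespace and service account, the host-side interface name and, once attached, the MAC. A simplified Go mirror of just the fields visible in the log, filled with the coredns-7db6d8ff4d-srzmr values from earlier; it deliberately does not reuse the real v3 API types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// workloadEndpoint mirrors only the fields printed in the CNI log above;
// the real projectcalico.org/v3 type carries more.
type workloadEndpoint struct {
	Namespace     string   `json:"namespace"`
	Pod           string   `json:"pod"`
	Node          string   `json:"node"`
	Orchestrator  string   `json:"orchestrator"`
	IPNetworks    []string `json:"ipNetworks"`
	Profiles      []string `json:"profiles"`
	InterfaceName string   `json:"interfaceName"`
	MAC           string   `json:"mac,omitempty"`
}

func main() {
	// Values copied from the coredns-7db6d8ff4d-srzmr endpoint earlier in the log.
	ep := workloadEndpoint{
		Namespace:     "kube-system",
		Pod:           "coredns-7db6d8ff4d-srzmr",
		Node:          "localhost",
		Orchestrator:  "k8s",
		IPNetworks:    []string{"192.168.88.129/32"},
		Profiles:      []string{"kns.kube-system", "ksa.kube-system.coredns"},
		InterfaceName: "caliab3afd87926",
		MAC:           "da:c2:03:a1:6b:f4",
	}
	out, _ := json.MarshalIndent(ep, "", "  ")
	fmt.Println(string(out))
}
```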
May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.596 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.596 [INFO][4422] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.601 [INFO][4422] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.622 [INFO][4422] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.641 [INFO][4422] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.644 [INFO][4422] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.647 [INFO][4422] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.647 [INFO][4422] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.648 [INFO][4422] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.657 [INFO][4422] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.668 [INFO][4422] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.668 [INFO][4422] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" host="localhost" May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.668 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
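For each endpoint, systemd-networkd reports the host-side cali* veth coming up and then gaining carrier ("cali34b8ccad11a: Link UP", "Gained carrier"). A rough way to check those two states from the node using only the standard library; the interface names are the ephemeral ones from this log, so the check is only meaningful while those veths exist:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// linkState reports roughly what systemd-networkd logs above as
// "Link UP" (administrative up flag) and "Gained carrier" (sysfs carrier bit).
func linkState(name string) (up bool, carrier bool, err error) {
	iface, err := net.InterfaceByName(name)
	if err != nil {
		return false, false, err
	}
	up = iface.Flags&net.FlagUp != 0

	// /sys/class/net/<iface>/carrier reads "1" once the link has carrier.
	data, err := os.ReadFile("/sys/class/net/" + name + "/carrier")
	if err != nil {
		return up, false, err
	}
	carrier = strings.TrimSpace(string(data)) == "1"
	return up, carrier, nil
}

func main() {
	// Host-side veth names taken from the log above.
	for _, name := range []string{"caliab3afd87926", "cali10222e0e31c", "cali400611439f1", "cali34b8ccad11a"} {
		up, carrier, err := linkState(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s up=%v carrier=%v\n", name, up, carrier)
	}
}
```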
May 10 00:07:27.694035 containerd[1455]: 2025-05-10 00:07:27.668 [INFO][4422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" HandleID="k8s-pod-network.f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Workload="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" May 10 00:07:27.694616 containerd[1455]: 2025-05-10 00:07:27.675 [INFO][4398] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kflv6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c06d69b8-4f38-4476-a0a9-074ed47a6924", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-kflv6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34b8ccad11a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.694616 containerd[1455]: 2025-05-10 00:07:27.675 [INFO][4398] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kflv6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" May 10 00:07:27.694616 containerd[1455]: 2025-05-10 00:07:27.675 [INFO][4398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34b8ccad11a ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kflv6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" May 10 00:07:27.694616 containerd[1455]: 2025-05-10 00:07:27.676 [INFO][4398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kflv6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" May 10 00:07:27.694616 containerd[1455]: 2025-05-10 00:07:27.677 
[INFO][4398] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kflv6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c06d69b8-4f38-4476-a0a9-074ed47a6924", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a", Pod:"coredns-7db6d8ff4d-kflv6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34b8ccad11a", MAC:"8a:f6:13:4f:de:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.694616 containerd[1455]: 2025-05-10 00:07:27.689 [INFO][4398] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kflv6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kflv6-eth0" May 10 00:07:27.708233 systemd[1]: run-netns-cni\x2d2b8d88dc\x2da56e\x2d303d\x2de51c\x2dbc00263ebc2f.mount: Deactivated successfully. May 10 00:07:27.708391 systemd[1]: run-netns-cni\x2dd06fdc5e\x2de41a\x2d9c78\x2d32b7\x2db38277599c98.mount: Deactivated successfully. May 10 00:07:27.727981 containerd[1455]: time="2025-05-10T00:07:27.727832044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:27.727981 containerd[1455]: time="2025-05-10T00:07:27.727913085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:27.727981 containerd[1455]: time="2025-05-10T00:07:27.727949806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.728200 containerd[1455]: time="2025-05-10T00:07:27.728100128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.729124 systemd[1]: Started cri-containerd-36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07.scope - libcontainer container 36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07. May 10 00:07:27.730167 systemd[1]: Started cri-containerd-46d6131e5a19dc1bbe838363998170ca51d2dbc4bca6669ac6204f595955929c.scope - libcontainer container 46d6131e5a19dc1bbe838363998170ca51d2dbc4bca6669ac6204f595955929c. May 10 00:07:27.731643 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:07:27.750432 systemd-networkd[1386]: calif8866454fa3: Link UP May 10 00:07:27.752072 systemd-networkd[1386]: calif8866454fa3: Gained carrier May 10 00:07:27.755313 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:34560.service - OpenSSH per-connection server daemon (10.0.0.1:34560). May 10 00:07:27.775238 systemd[1]: Started cri-containerd-f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a.scope - libcontainer container f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a. May 10 00:07:27.783819 containerd[1455]: time="2025-05-10T00:07:27.783777204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hd28,Uid:56542ad7-7f48-4051-ae36-d7536ab16d6e,Namespace:calico-system,Attempt:3,} returns sandbox id \"2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4\"" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.066 [INFO][4373] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.135 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0 calico-apiserver-7c5c466cb8- calico-apiserver 4adb856a-b358-4a75-afdb-0a2493e0d860 722 0 2025-05-10 00:07:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c5c466cb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c5c466cb8-7rrbw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8866454fa3 [] []}} ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-7rrbw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.135 [INFO][4373] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-7rrbw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.388 [INFO][4425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" HandleID="k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Workload="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.411 [INFO][4425] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" HandleID="k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Workload="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cfb30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c5c466cb8-7rrbw", "timestamp":"2025-05-10 00:07:27.388599816 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.411 [INFO][4425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.668 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.668 [INFO][4425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.672 [INFO][4425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.683 [INFO][4425] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.703 [INFO][4425] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.710 [INFO][4425] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.714 [INFO][4425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.714 [INFO][4425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.717 [INFO][4425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.722 [INFO][4425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.734 [INFO][4425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.734 [INFO][4425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" host="localhost" May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.734 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:07:27.787274 containerd[1455]: 2025-05-10 00:07:27.734 [INFO][4425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" HandleID="k8s-pod-network.776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Workload="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" May 10 00:07:27.788319 containerd[1455]: 2025-05-10 00:07:27.745 [INFO][4373] cni-plugin/k8s.go 386: Populated endpoint ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-7rrbw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0", GenerateName:"calico-apiserver-7c5c466cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"4adb856a-b358-4a75-afdb-0a2493e0d860", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5c466cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c5c466cb8-7rrbw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8866454fa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.788319 containerd[1455]: 2025-05-10 00:07:27.745 [INFO][4373] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-7rrbw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" May 10 00:07:27.788319 containerd[1455]: 2025-05-10 00:07:27.745 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8866454fa3 ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-7rrbw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" May 10 00:07:27.788319 containerd[1455]: 2025-05-10 00:07:27.752 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-7rrbw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" May 10 00:07:27.788319 containerd[1455]: 2025-05-10 00:07:27.752 [INFO][4373] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-7rrbw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0", GenerateName:"calico-apiserver-7c5c466cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"4adb856a-b358-4a75-afdb-0a2493e0d860", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5c466cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac", Pod:"calico-apiserver-7c5c466cb8-7rrbw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8866454fa3", MAC:"ae:b8:1d:02:e3:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:07:27.788319 containerd[1455]: 2025-05-10 00:07:27.776 [INFO][4373] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac" Namespace="calico-apiserver" Pod="calico-apiserver-7c5c466cb8-7rrbw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c5c466cb8--7rrbw-eth0" May 10 00:07:27.789574 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:07:27.801166 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:07:27.804476 containerd[1455]: time="2025-05-10T00:07:27.803150991Z" level=info msg="StartContainer for \"46d6131e5a19dc1bbe838363998170ca51d2dbc4bca6669ac6204f595955929c\" returns successfully" May 10 00:07:27.819453 sshd[4716]: Accepted publickey for core from 10.0.0.1 port 34560 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:27.822309 sshd-session[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:27.831511 systemd-logind[1429]: New session 9 of user core. May 10 00:07:27.838970 containerd[1455]: time="2025-05-10T00:07:27.838111896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:07:27.838970 containerd[1455]: time="2025-05-10T00:07:27.838956271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:07:27.839090 containerd[1455]: time="2025-05-10T00:07:27.838971111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.839090 containerd[1455]: time="2025-05-10T00:07:27.839052593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:07:27.839284 systemd[1]: Started session-9.scope - Session 9 of User core. May 10 00:07:27.846167 containerd[1455]: time="2025-05-10T00:07:27.844431089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kflv6,Uid:c06d69b8-4f38-4476-a0a9-074ed47a6924,Namespace:kube-system,Attempt:4,} returns sandbox id \"f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a\"" May 10 00:07:27.848747 kubelet[2619]: E0510 00:07:27.848709 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:27.865820 containerd[1455]: time="2025-05-10T00:07:27.865778511Z" level=info msg="CreateContainer within sandbox \"f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:07:27.867376 systemd[1]: Started cri-containerd-776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac.scope - libcontainer container 776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac. May 10 00:07:27.869823 containerd[1455]: time="2025-05-10T00:07:27.869725581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-82f6f,Uid:01ce480b-3a6d-4fd5-af7a-73b802892ab1,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07\"" May 10 00:07:27.885390 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:07:27.886610 containerd[1455]: time="2025-05-10T00:07:27.886559483Z" level=info msg="CreateContainer within sandbox \"f95fc3395d621487392c567baea695acf86770e81514e75f44d8f1fa4133cd0a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baff0f4e77ee975435e14227b734ac90941f80ce1364ccc129c6157136373d53\"" May 10 00:07:27.887080 containerd[1455]: time="2025-05-10T00:07:27.887049851Z" level=info msg="StartContainer for \"baff0f4e77ee975435e14227b734ac90941f80ce1364ccc129c6157136373d53\"" May 10 00:07:27.919995 containerd[1455]: time="2025-05-10T00:07:27.919748796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5c466cb8-7rrbw,Uid:4adb856a-b358-4a75-afdb-0a2493e0d860,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac\"" May 10 00:07:27.926007 systemd[1]: Started cri-containerd-baff0f4e77ee975435e14227b734ac90941f80ce1364ccc129c6157136373d53.scope - libcontainer container baff0f4e77ee975435e14227b734ac90941f80ce1364ccc129c6157136373d53. 
May 10 00:07:27.965323 containerd[1455]: time="2025-05-10T00:07:27.964227552Z" level=info msg="StartContainer for \"baff0f4e77ee975435e14227b734ac90941f80ce1364ccc129c6157136373d53\" returns successfully" May 10 00:07:27.996553 kubelet[2619]: E0510 00:07:27.996201 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:28.006482 kubelet[2619]: E0510 00:07:28.005900 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:28.011771 kubelet[2619]: I0510 00:07:28.011638 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-srzmr" podStartSLOduration=19.011621713 podStartE2EDuration="19.011621713s" podCreationTimestamp="2025-05-10 00:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:07:28.011309268 +0000 UTC m=+34.339242093" watchObservedRunningTime="2025-05-10 00:07:28.011621713 +0000 UTC m=+34.339554538" May 10 00:07:28.019899 kubelet[2619]: E0510 00:07:28.019862 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:28.032111 kubelet[2619]: I0510 00:07:28.031972 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kflv6" podStartSLOduration=19.031954346 podStartE2EDuration="19.031954346s" podCreationTimestamp="2025-05-10 00:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:07:28.031544179 +0000 UTC m=+34.359477004" watchObservedRunningTime="2025-05-10 00:07:28.031954346 +0000 UTC m=+34.359887171" May 10 00:07:28.080965 sshd[4802]: Connection closed by 10.0.0.1 port 34560 May 10 00:07:28.082101 sshd-session[4716]: pam_unix(sshd:session): session closed for user core May 10 00:07:28.094208 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:34560.service: Deactivated successfully. May 10 00:07:28.096102 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:07:28.097939 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. May 10 00:07:28.098924 systemd-logind[1429]: Removed session 9. 
May 10 00:07:28.759026 systemd-networkd[1386]: cali34b8ccad11a: Gained IPv6LL May 10 00:07:28.759345 systemd-networkd[1386]: caliab3afd87926: Gained IPv6LL May 10 00:07:28.909763 containerd[1455]: time="2025-05-10T00:07:28.909717087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:28.910665 containerd[1455]: time="2025-05-10T00:07:28.910170135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 10 00:07:28.911238 containerd[1455]: time="2025-05-10T00:07:28.911190273Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:28.913336 containerd[1455]: time="2025-05-10T00:07:28.913297989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:28.914124 containerd[1455]: time="2025-05-10T00:07:28.914094963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.260260403s" May 10 00:07:28.914184 containerd[1455]: time="2025-05-10T00:07:28.914129964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 10 00:07:28.915867 containerd[1455]: time="2025-05-10T00:07:28.915667550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 00:07:28.923784 containerd[1455]: time="2025-05-10T00:07:28.923732490Z" level=info msg="CreateContainer within sandbox \"cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 00:07:28.935613 containerd[1455]: time="2025-05-10T00:07:28.935554575Z" level=info msg="CreateContainer within sandbox \"cdc558cf6ea79ecc171c37ff2a500cad51014f5c071f06f38e3268738b6ceed1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"47757da207b3e80797731651a71b4ebb1e788f53c2a91a47b7642f96b73839f1\"" May 10 00:07:28.936498 containerd[1455]: time="2025-05-10T00:07:28.936462031Z" level=info msg="StartContainer for \"47757da207b3e80797731651a71b4ebb1e788f53c2a91a47b7642f96b73839f1\"" May 10 00:07:28.985052 systemd[1]: Started cri-containerd-47757da207b3e80797731651a71b4ebb1e788f53c2a91a47b7642f96b73839f1.scope - libcontainer container 47757da207b3e80797731651a71b4ebb1e788f53c2a91a47b7642f96b73839f1. 
May 10 00:07:29.055194 containerd[1455]: time="2025-05-10T00:07:29.054667413Z" level=info msg="StartContainer for \"47757da207b3e80797731651a71b4ebb1e788f53c2a91a47b7642f96b73839f1\" returns successfully" May 10 00:07:29.065051 kubelet[2619]: E0510 00:07:29.063460 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:29.065051 kubelet[2619]: E0510 00:07:29.063497 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:29.066675 kubelet[2619]: E0510 00:07:29.065512 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:29.080520 kubelet[2619]: I0510 00:07:29.080462 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c95969b9-5mpjw" podStartSLOduration=12.818876301 podStartE2EDuration="14.080443607s" podCreationTimestamp="2025-05-10 00:07:15 +0000 UTC" firstStartedPulling="2025-05-10 00:07:27.653509994 +0000 UTC m=+33.981442819" lastFinishedPulling="2025-05-10 00:07:28.91507734 +0000 UTC m=+35.243010125" observedRunningTime="2025-05-10 00:07:29.078599856 +0000 UTC m=+35.406532641" watchObservedRunningTime="2025-05-10 00:07:29.080443607 +0000 UTC m=+35.408376432" May 10 00:07:29.269999 systemd-networkd[1386]: calicb916e63fed: Gained IPv6LL May 10 00:07:29.333975 systemd-networkd[1386]: cali400611439f1: Gained IPv6LL May 10 00:07:29.334255 systemd-networkd[1386]: calif8866454fa3: Gained IPv6LL May 10 00:07:29.462006 systemd-networkd[1386]: cali10222e0e31c: Gained IPv6LL May 10 00:07:29.820702 containerd[1455]: time="2025-05-10T00:07:29.820659225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:29.822092 containerd[1455]: time="2025-05-10T00:07:29.822038808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 10 00:07:29.822954 containerd[1455]: time="2025-05-10T00:07:29.822901342Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:29.825578 containerd[1455]: time="2025-05-10T00:07:29.825336543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:29.826287 containerd[1455]: time="2025-05-10T00:07:29.826248479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 910.550088ms" May 10 00:07:29.826287 containerd[1455]: time="2025-05-10T00:07:29.826281319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 10 00:07:29.827989 containerd[1455]: 
time="2025-05-10T00:07:29.827870346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:07:29.828647 containerd[1455]: time="2025-05-10T00:07:29.828604798Z" level=info msg="CreateContainer within sandbox \"2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 10 00:07:29.861680 containerd[1455]: time="2025-05-10T00:07:29.861623194Z" level=info msg="CreateContainer within sandbox \"2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e2f0052fc39f79172e76023423306d124191b750bfeb4235ee01033046afe2b6\"" May 10 00:07:29.862153 containerd[1455]: time="2025-05-10T00:07:29.862125323Z" level=info msg="StartContainer for \"e2f0052fc39f79172e76023423306d124191b750bfeb4235ee01033046afe2b6\"" May 10 00:07:29.893070 systemd[1]: Started cri-containerd-e2f0052fc39f79172e76023423306d124191b750bfeb4235ee01033046afe2b6.scope - libcontainer container e2f0052fc39f79172e76023423306d124191b750bfeb4235ee01033046afe2b6. May 10 00:07:29.920973 containerd[1455]: time="2025-05-10T00:07:29.920797510Z" level=info msg="StartContainer for \"e2f0052fc39f79172e76023423306d124191b750bfeb4235ee01033046afe2b6\" returns successfully" May 10 00:07:30.067339 kubelet[2619]: E0510 00:07:30.067301 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:30.067786 kubelet[2619]: I0510 00:07:30.067660 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:30.068534 kubelet[2619]: E0510 00:07:30.068477 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:31.069701 kubelet[2619]: E0510 00:07:31.069670 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:31.269488 containerd[1455]: time="2025-05-10T00:07:31.269429159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:31.270263 containerd[1455]: time="2025-05-10T00:07:31.270241132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 10 00:07:31.271661 containerd[1455]: time="2025-05-10T00:07:31.271630434Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:31.274662 containerd[1455]: time="2025-05-10T00:07:31.274625522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:31.275287 containerd[1455]: time="2025-05-10T00:07:31.275214331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size 
\"41616801\" in 1.447221823s" May 10 00:07:31.275287 containerd[1455]: time="2025-05-10T00:07:31.275238172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:07:31.277261 containerd[1455]: time="2025-05-10T00:07:31.277072881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:07:31.277941 containerd[1455]: time="2025-05-10T00:07:31.277909294Z" level=info msg="CreateContainer within sandbox \"36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:07:31.293368 containerd[1455]: time="2025-05-10T00:07:31.293228298Z" level=info msg="CreateContainer within sandbox \"36c0aa707cb5c141f99df7bfcb85160c31780fa82d4f7e405685d2caedabac07\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d080100e9b74c951f7b51f731b2206ab743b18d940619e4c2ff42ed6ae52cb56\"" May 10 00:07:31.293368 containerd[1455]: time="2025-05-10T00:07:31.293960389Z" level=info msg="StartContainer for \"d080100e9b74c951f7b51f731b2206ab743b18d940619e4c2ff42ed6ae52cb56\"" May 10 00:07:31.330031 systemd[1]: Started cri-containerd-d080100e9b74c951f7b51f731b2206ab743b18d940619e4c2ff42ed6ae52cb56.scope - libcontainer container d080100e9b74c951f7b51f731b2206ab743b18d940619e4c2ff42ed6ae52cb56. May 10 00:07:31.374429 containerd[1455]: time="2025-05-10T00:07:31.374354828Z" level=info msg="StartContainer for \"d080100e9b74c951f7b51f731b2206ab743b18d940619e4c2ff42ed6ae52cb56\" returns successfully" May 10 00:07:31.497185 containerd[1455]: time="2025-05-10T00:07:31.497134460Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:31.497316 containerd[1455]: time="2025-05-10T00:07:31.497194221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 10 00:07:31.499345 containerd[1455]: time="2025-05-10T00:07:31.499313415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 222.207414ms" May 10 00:07:31.499409 containerd[1455]: time="2025-05-10T00:07:31.499348655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:07:31.500608 containerd[1455]: time="2025-05-10T00:07:31.500236630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 10 00:07:31.504002 containerd[1455]: time="2025-05-10T00:07:31.503961529Z" level=info msg="CreateContainer within sandbox \"776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:07:31.518799 containerd[1455]: time="2025-05-10T00:07:31.518729004Z" level=info msg="CreateContainer within sandbox \"776d04603ba0b1e6a8de6dc0da0b18f0b9f2976a545ab917a4eebe8176a3aeac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e59314b618d93993d566c9052371dcb2cc80529bc77400c7966d8bf043c6613f\"" May 10 
00:07:31.519238 containerd[1455]: time="2025-05-10T00:07:31.519210531Z" level=info msg="StartContainer for \"e59314b618d93993d566c9052371dcb2cc80529bc77400c7966d8bf043c6613f\"" May 10 00:07:31.553992 systemd[1]: Started cri-containerd-e59314b618d93993d566c9052371dcb2cc80529bc77400c7966d8bf043c6613f.scope - libcontainer container e59314b618d93993d566c9052371dcb2cc80529bc77400c7966d8bf043c6613f. May 10 00:07:31.599050 containerd[1455]: time="2025-05-10T00:07:31.598802517Z" level=info msg="StartContainer for \"e59314b618d93993d566c9052371dcb2cc80529bc77400c7966d8bf043c6613f\" returns successfully" May 10 00:07:32.094944 kubelet[2619]: I0510 00:07:32.093507 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c5c466cb8-7rrbw" podStartSLOduration=13.515703919 podStartE2EDuration="17.093490225s" podCreationTimestamp="2025-05-10 00:07:15 +0000 UTC" firstStartedPulling="2025-05-10 00:07:27.922264721 +0000 UTC m=+34.250197506" lastFinishedPulling="2025-05-10 00:07:31.500050987 +0000 UTC m=+37.827983812" observedRunningTime="2025-05-10 00:07:32.092608131 +0000 UTC m=+38.420540956" watchObservedRunningTime="2025-05-10 00:07:32.093490225 +0000 UTC m=+38.421423050" May 10 00:07:32.110450 kubelet[2619]: I0510 00:07:32.109736 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c5c466cb8-82f6f" podStartSLOduration=13.70822728 podStartE2EDuration="17.109720156s" podCreationTimestamp="2025-05-10 00:07:15 +0000 UTC" firstStartedPulling="2025-05-10 00:07:27.874603869 +0000 UTC m=+34.202536694" lastFinishedPulling="2025-05-10 00:07:31.276096745 +0000 UTC m=+37.604029570" observedRunningTime="2025-05-10 00:07:32.108944104 +0000 UTC m=+38.436876969" watchObservedRunningTime="2025-05-10 00:07:32.109720156 +0000 UTC m=+38.437652941" May 10 00:07:32.643611 containerd[1455]: time="2025-05-10T00:07:32.642868090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:32.644088 containerd[1455]: time="2025-05-10T00:07:32.643657702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 10 00:07:32.647304 containerd[1455]: time="2025-05-10T00:07:32.644649317Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:32.649027 containerd[1455]: time="2025-05-10T00:07:32.648937664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:07:32.650567 containerd[1455]: time="2025-05-10T00:07:32.650311005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.150044855s" May 10 00:07:32.650567 containerd[1455]: time="2025-05-10T00:07:32.650360246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference 
\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 10 00:07:32.654128 containerd[1455]: time="2025-05-10T00:07:32.654099743Z" level=info msg="CreateContainer within sandbox \"2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 10 00:07:32.670714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380771654.mount: Deactivated successfully. May 10 00:07:32.685771 containerd[1455]: time="2025-05-10T00:07:32.685723513Z" level=info msg="CreateContainer within sandbox \"2a3cf9f6b40683112821745fe63014ddd0a7f3abd8295708d9063fe3c3a13ff4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"173742c408d1ce93d77b3da5540552ed396c2ad142a749ef085ba7b78e41d71b\"" May 10 00:07:32.687619 containerd[1455]: time="2025-05-10T00:07:32.687149815Z" level=info msg="StartContainer for \"173742c408d1ce93d77b3da5540552ed396c2ad142a749ef085ba7b78e41d71b\"" May 10 00:07:32.726030 systemd[1]: Started cri-containerd-173742c408d1ce93d77b3da5540552ed396c2ad142a749ef085ba7b78e41d71b.scope - libcontainer container 173742c408d1ce93d77b3da5540552ed396c2ad142a749ef085ba7b78e41d71b. May 10 00:07:32.802326 containerd[1455]: time="2025-05-10T00:07:32.802268237Z" level=info msg="StartContainer for \"173742c408d1ce93d77b3da5540552ed396c2ad142a749ef085ba7b78e41d71b\" returns successfully" May 10 00:07:33.091675 kubelet[2619]: I0510 00:07:33.091579 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:33.091675 kubelet[2619]: I0510 00:07:33.091635 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:33.096237 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:39004.service - OpenSSH per-connection server daemon (10.0.0.1:39004). May 10 00:07:33.105556 kubelet[2619]: I0510 00:07:33.104206 2619 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6hd28" podStartSLOduration=13.238869579 podStartE2EDuration="18.103365419s" podCreationTimestamp="2025-05-10 00:07:15 +0000 UTC" firstStartedPulling="2025-05-10 00:07:27.787533391 +0000 UTC m=+34.115466216" lastFinishedPulling="2025-05-10 00:07:32.652029231 +0000 UTC m=+38.979962056" observedRunningTime="2025-05-10 00:07:33.103278738 +0000 UTC m=+39.431211563" watchObservedRunningTime="2025-05-10 00:07:33.103365419 +0000 UTC m=+39.431298244" May 10 00:07:33.163306 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 39004 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:33.165104 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:33.169273 systemd-logind[1429]: New session 10 of user core. May 10 00:07:33.180043 systemd[1]: Started session-10.scope - Session 10 of User core. May 10 00:07:33.366040 sshd[5353]: Connection closed by 10.0.0.1 port 39004 May 10 00:07:33.366345 sshd-session[5349]: pam_unix(sshd:session): session closed for user core May 10 00:07:33.376530 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:39004.service: Deactivated successfully. May 10 00:07:33.378252 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:07:33.379536 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. May 10 00:07:33.387580 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:39016.service - OpenSSH per-connection server daemon (10.0.0.1:39016). 
May 10 00:07:33.388513 systemd-logind[1429]: Removed session 10. May 10 00:07:33.425317 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 39016 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:33.426569 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:33.434389 systemd-logind[1429]: New session 11 of user core. May 10 00:07:33.443049 systemd[1]: Started session-11.scope - Session 11 of User core. May 10 00:07:33.678814 sshd[5369]: Connection closed by 10.0.0.1 port 39016 May 10 00:07:33.679313 sshd-session[5366]: pam_unix(sshd:session): session closed for user core May 10 00:07:33.691829 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:39016.service: Deactivated successfully. May 10 00:07:33.694737 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:07:33.697895 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. May 10 00:07:33.705221 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:39022.service - OpenSSH per-connection server daemon (10.0.0.1:39022). May 10 00:07:33.706163 systemd-logind[1429]: Removed session 11. May 10 00:07:33.728119 kubelet[2619]: I0510 00:07:33.727955 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:33.731225 kubelet[2619]: E0510 00:07:33.730323 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:33.777209 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 39022 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:33.779052 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:33.783443 systemd-logind[1429]: New session 12 of user core. May 10 00:07:33.791028 systemd[1]: Started session-12.scope - Session 12 of User core. May 10 00:07:33.845982 kubelet[2619]: I0510 00:07:33.845905 2619 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 10 00:07:33.859540 kubelet[2619]: I0510 00:07:33.859220 2619 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 10 00:07:34.023195 sshd[5407]: Connection closed by 10.0.0.1 port 39022 May 10 00:07:34.023556 sshd-session[5403]: pam_unix(sshd:session): session closed for user core May 10 00:07:34.027079 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:39022.service: Deactivated successfully. May 10 00:07:34.029070 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:07:34.029677 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. May 10 00:07:34.030670 systemd-logind[1429]: Removed session 12. 
May 10 00:07:34.094641 kubelet[2619]: E0510 00:07:34.094542 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:34.488889 kernel: bpftool[5443]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 10 00:07:34.724188 systemd-networkd[1386]: vxlan.calico: Link UP May 10 00:07:34.724199 systemd-networkd[1386]: vxlan.calico: Gained carrier May 10 00:07:36.437983 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL May 10 00:07:39.036706 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:39038.service - OpenSSH per-connection server daemon (10.0.0.1:39038). May 10 00:07:39.092977 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 39038 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:39.094651 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:39.101979 systemd-logind[1429]: New session 13 of user core. May 10 00:07:39.114061 systemd[1]: Started session-13.scope - Session 13 of User core. May 10 00:07:39.314990 sshd[5572]: Connection closed by 10.0.0.1 port 39038 May 10 00:07:39.316512 sshd-session[5570]: pam_unix(sshd:session): session closed for user core May 10 00:07:39.330231 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:39038.service: Deactivated successfully. May 10 00:07:39.332454 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:07:39.333982 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. May 10 00:07:39.344140 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:39046.service - OpenSSH per-connection server daemon (10.0.0.1:39046). May 10 00:07:39.345416 systemd-logind[1429]: Removed session 13. May 10 00:07:39.386386 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 39046 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:39.387795 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:39.392072 systemd-logind[1429]: New session 14 of user core. May 10 00:07:39.396015 systemd[1]: Started session-14.scope - Session 14 of User core. May 10 00:07:39.615084 sshd[5586]: Connection closed by 10.0.0.1 port 39046 May 10 00:07:39.614817 sshd-session[5584]: pam_unix(sshd:session): session closed for user core May 10 00:07:39.627669 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:39046.service: Deactivated successfully. May 10 00:07:39.630525 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:07:39.632094 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. May 10 00:07:39.642450 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:39050.service - OpenSSH per-connection server daemon (10.0.0.1:39050). May 10 00:07:39.643807 systemd-logind[1429]: Removed session 14. May 10 00:07:39.683527 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 39050 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:39.685035 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:39.690534 systemd-logind[1429]: New session 15 of user core. May 10 00:07:39.700062 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 10 00:07:41.128624 sshd[5599]: Connection closed by 10.0.0.1 port 39050 May 10 00:07:41.129208 sshd-session[5597]: pam_unix(sshd:session): session closed for user core May 10 00:07:41.144360 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:39066.service - OpenSSH per-connection server daemon (10.0.0.1:39066). May 10 00:07:41.144882 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:39050.service: Deactivated successfully. May 10 00:07:41.150691 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:07:41.154023 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. May 10 00:07:41.158419 systemd-logind[1429]: Removed session 15. May 10 00:07:41.201863 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 39066 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:41.203604 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:41.207766 systemd-logind[1429]: New session 16 of user core. May 10 00:07:41.223106 systemd[1]: Started session-16.scope - Session 16 of User core. May 10 00:07:41.543635 sshd[5631]: Connection closed by 10.0.0.1 port 39066 May 10 00:07:41.545381 sshd-session[5625]: pam_unix(sshd:session): session closed for user core May 10 00:07:41.557914 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:39066.service: Deactivated successfully. May 10 00:07:41.561109 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:07:41.563450 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit. May 10 00:07:41.569626 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:39080.service - OpenSSH per-connection server daemon (10.0.0.1:39080). May 10 00:07:41.571474 systemd-logind[1429]: Removed session 16. May 10 00:07:41.611161 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 39080 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:41.612458 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:41.616494 systemd-logind[1429]: New session 17 of user core. May 10 00:07:41.628076 systemd[1]: Started session-17.scope - Session 17 of User core. May 10 00:07:41.778034 sshd[5644]: Connection closed by 10.0.0.1 port 39080 May 10 00:07:41.778402 sshd-session[5642]: pam_unix(sshd:session): session closed for user core May 10 00:07:41.781702 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. May 10 00:07:41.782085 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:39080.service: Deactivated successfully. May 10 00:07:41.783804 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:07:41.784586 systemd-logind[1429]: Removed session 17. May 10 00:07:46.789798 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:60608.service - OpenSSH per-connection server daemon (10.0.0.1:60608). May 10 00:07:46.835653 sshd[5667]: Accepted publickey for core from 10.0.0.1 port 60608 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:46.837077 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:46.841059 systemd-logind[1429]: New session 18 of user core. May 10 00:07:46.847045 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 10 00:07:46.982581 sshd[5669]: Connection closed by 10.0.0.1 port 60608 May 10 00:07:46.982761 sshd-session[5667]: pam_unix(sshd:session): session closed for user core May 10 00:07:46.986497 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:60608.service: Deactivated successfully. May 10 00:07:46.989648 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:07:46.990402 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. May 10 00:07:46.991690 systemd-logind[1429]: Removed session 18. May 10 00:07:47.823302 kubelet[2619]: E0510 00:07:47.822610 2619 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:07:49.168039 kubelet[2619]: I0510 00:07:49.167870 2619 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:07:49.237943 systemd[1]: run-containerd-runc-k8s.io-47757da207b3e80797731651a71b4ebb1e788f53c2a91a47b7642f96b73839f1-runc.8DLn14.mount: Deactivated successfully. May 10 00:07:51.993795 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:60616.service - OpenSSH per-connection server daemon (10.0.0.1:60616). May 10 00:07:52.041946 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 60616 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c May 10 00:07:52.043316 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:07:52.047191 systemd-logind[1429]: New session 19 of user core. May 10 00:07:52.056041 systemd[1]: Started session-19.scope - Session 19 of User core. May 10 00:07:52.187264 sshd[5748]: Connection closed by 10.0.0.1 port 60616 May 10 00:07:52.187798 sshd-session[5746]: pam_unix(sshd:session): session closed for user core May 10 00:07:52.190786 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:60616.service: Deactivated successfully. May 10 00:07:52.192489 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:07:52.193764 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit. May 10 00:07:52.194767 systemd-logind[1429]: Removed session 19. 
May 10 00:07:53.742871 containerd[1455]: time="2025-05-10T00:07:53.742818739Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\"" May 10 00:07:53.743244 containerd[1455]: time="2025-05-10T00:07:53.743014501Z" level=info msg="TearDown network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" successfully" May 10 00:07:53.743244 containerd[1455]: time="2025-05-10T00:07:53.743027661Z" level=info msg="StopPodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" returns successfully" May 10 00:07:53.743432 containerd[1455]: time="2025-05-10T00:07:53.743390345Z" level=info msg="RemovePodSandbox for \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\"" May 10 00:07:53.743432 containerd[1455]: time="2025-05-10T00:07:53.743422745Z" level=info msg="Forcibly stopping sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\"" May 10 00:07:53.743550 containerd[1455]: time="2025-05-10T00:07:53.743495946Z" level=info msg="TearDown network for sandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" successfully" May 10 00:07:53.764251 containerd[1455]: time="2025-05-10T00:07:53.764137209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:53.764251 containerd[1455]: time="2025-05-10T00:07:53.764259850Z" level=info msg="RemovePodSandbox \"c19924cbf291cfedb2344018f82e64d64b01b1d95b185c4d2455bf23ec598274\" returns successfully" May 10 00:07:53.765084 containerd[1455]: time="2025-05-10T00:07:53.765045019Z" level=info msg="StopPodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\"" May 10 00:07:53.765857 containerd[1455]: time="2025-05-10T00:07:53.765151500Z" level=info msg="TearDown network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" successfully" May 10 00:07:53.765857 containerd[1455]: time="2025-05-10T00:07:53.765164980Z" level=info msg="StopPodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" returns successfully" May 10 00:07:53.765857 containerd[1455]: time="2025-05-10T00:07:53.765418623Z" level=info msg="RemovePodSandbox for \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\"" May 10 00:07:53.765857 containerd[1455]: time="2025-05-10T00:07:53.765439143Z" level=info msg="Forcibly stopping sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\"" May 10 00:07:53.765857 containerd[1455]: time="2025-05-10T00:07:53.765496464Z" level=info msg="TearDown network for sandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" successfully" May 10 00:07:53.768298 containerd[1455]: time="2025-05-10T00:07:53.768252453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:07:53.768363 containerd[1455]: time="2025-05-10T00:07:53.768316454Z" level=info msg="RemovePodSandbox \"25dbe61311c0d9faceee9b8b3cc0344ea5d46d787497e1c3a8891804cbfccca0\" returns successfully" May 10 00:07:53.769037 containerd[1455]: time="2025-05-10T00:07:53.768868540Z" level=info msg="StopPodSandbox for \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\"" May 10 00:07:53.769037 containerd[1455]: time="2025-05-10T00:07:53.768967941Z" level=info msg="TearDown network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\" successfully" May 10 00:07:53.769037 containerd[1455]: time="2025-05-10T00:07:53.768978381Z" level=info msg="StopPodSandbox for \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\" returns successfully" May 10 00:07:53.770879 containerd[1455]: time="2025-05-10T00:07:53.769397826Z" level=info msg="RemovePodSandbox for \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\"" May 10 00:07:53.770879 containerd[1455]: time="2025-05-10T00:07:53.769424186Z" level=info msg="Forcibly stopping sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\"" May 10 00:07:53.770879 containerd[1455]: time="2025-05-10T00:07:53.769735309Z" level=info msg="TearDown network for sandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\" successfully" May 10 00:07:53.772437 containerd[1455]: time="2025-05-10T00:07:53.772377138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:53.772534 containerd[1455]: time="2025-05-10T00:07:53.772447419Z" level=info msg="RemovePodSandbox \"adac49ec9355a7516d3e3e03c3d90bf95280ea4cc39e496c40aa1f106fe9b11b\" returns successfully" May 10 00:07:53.773320 containerd[1455]: time="2025-05-10T00:07:53.772800502Z" level=info msg="StopPodSandbox for \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\"" May 10 00:07:53.773320 containerd[1455]: time="2025-05-10T00:07:53.772905744Z" level=info msg="TearDown network for sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\" successfully" May 10 00:07:53.773320 containerd[1455]: time="2025-05-10T00:07:53.772917344Z" level=info msg="StopPodSandbox for \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\" returns successfully" May 10 00:07:53.773320 containerd[1455]: time="2025-05-10T00:07:53.773278788Z" level=info msg="RemovePodSandbox for \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\"" May 10 00:07:53.773320 containerd[1455]: time="2025-05-10T00:07:53.773308628Z" level=info msg="Forcibly stopping sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\"" May 10 00:07:53.773489 containerd[1455]: time="2025-05-10T00:07:53.773367669Z" level=info msg="TearDown network for sandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\" successfully" May 10 00:07:53.776554 containerd[1455]: time="2025-05-10T00:07:53.776506062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:07:53.776631 containerd[1455]: time="2025-05-10T00:07:53.776572503Z" level=info msg="RemovePodSandbox \"5fe645fe3987a1d20d67a620a0d4a65840958f7a9582f58c12c0c240cfaaac0a\" returns successfully" May 10 00:07:53.777860 containerd[1455]: time="2025-05-10T00:07:53.777822917Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\"" May 10 00:07:53.778018 containerd[1455]: time="2025-05-10T00:07:53.777932838Z" level=info msg="TearDown network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" successfully" May 10 00:07:53.778018 containerd[1455]: time="2025-05-10T00:07:53.777942998Z" level=info msg="StopPodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" returns successfully" May 10 00:07:53.778385 containerd[1455]: time="2025-05-10T00:07:53.778250041Z" level=info msg="RemovePodSandbox for \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\"" May 10 00:07:53.778385 containerd[1455]: time="2025-05-10T00:07:53.778282562Z" level=info msg="Forcibly stopping sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\"" May 10 00:07:53.778385 containerd[1455]: time="2025-05-10T00:07:53.778355922Z" level=info msg="TearDown network for sandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" successfully" May 10 00:07:53.782246 containerd[1455]: time="2025-05-10T00:07:53.782099443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:53.782246 containerd[1455]: time="2025-05-10T00:07:53.782181404Z" level=info msg="RemovePodSandbox \"bbdc02b92325d3eaa887dc6c4a0b35702fe8ec0e0c9d4ff1a6cb0c619f3dbad6\" returns successfully" May 10 00:07:53.783413 containerd[1455]: time="2025-05-10T00:07:53.783263135Z" level=info msg="StopPodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\"" May 10 00:07:53.783515 containerd[1455]: time="2025-05-10T00:07:53.783420057Z" level=info msg="TearDown network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" successfully" May 10 00:07:53.783515 containerd[1455]: time="2025-05-10T00:07:53.783431737Z" level=info msg="StopPodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" returns successfully" May 10 00:07:53.784248 containerd[1455]: time="2025-05-10T00:07:53.783901182Z" level=info msg="RemovePodSandbox for \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\"" May 10 00:07:53.784248 containerd[1455]: time="2025-05-10T00:07:53.783931063Z" level=info msg="Forcibly stopping sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\"" May 10 00:07:53.784248 containerd[1455]: time="2025-05-10T00:07:53.783995943Z" level=info msg="TearDown network for sandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" successfully" May 10 00:07:53.793100 containerd[1455]: time="2025-05-10T00:07:53.793057921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:07:53.793361 containerd[1455]: time="2025-05-10T00:07:53.793278444Z" level=info msg="RemovePodSandbox \"b8915781d76c7e4bed54a6320a2ec6c1e2689e181c08dce871146a3ef92a58c5\" returns successfully" May 10 00:07:53.794017 containerd[1455]: time="2025-05-10T00:07:53.793835970Z" level=info msg="StopPodSandbox for \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\"" May 10 00:07:53.794017 containerd[1455]: time="2025-05-10T00:07:53.793948451Z" level=info msg="TearDown network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\" successfully" May 10 00:07:53.794017 containerd[1455]: time="2025-05-10T00:07:53.793959851Z" level=info msg="StopPodSandbox for \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\" returns successfully" May 10 00:07:53.794544 containerd[1455]: time="2025-05-10T00:07:53.794379695Z" level=info msg="RemovePodSandbox for \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\"" May 10 00:07:53.794544 containerd[1455]: time="2025-05-10T00:07:53.794405336Z" level=info msg="Forcibly stopping sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\"" May 10 00:07:53.795766 containerd[1455]: time="2025-05-10T00:07:53.794463016Z" level=info msg="TearDown network for sandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\" successfully" May 10 00:07:53.797140 containerd[1455]: time="2025-05-10T00:07:53.797101365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:07:53.797221 containerd[1455]: time="2025-05-10T00:07:53.797163485Z" level=info msg="RemovePodSandbox \"c7f4833db2793d7a6f2d5af34885807a8151ca7d4eac5a79d35a0b094c420867\" returns successfully" May 10 00:07:53.797451 containerd[1455]: time="2025-05-10T00:07:53.797432808Z" level=info msg="StopPodSandbox for \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\"" May 10 00:07:53.797572 containerd[1455]: time="2025-05-10T00:07:53.797513809Z" level=info msg="TearDown network for sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\" successfully" May 10 00:07:53.797572 containerd[1455]: time="2025-05-10T00:07:53.797530089Z" level=info msg="StopPodSandbox for \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\" returns successfully" May 10 00:07:53.798893 containerd[1455]: time="2025-05-10T00:07:53.797974854Z" level=info msg="RemovePodSandbox for \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\"" May 10 00:07:53.798893 containerd[1455]: time="2025-05-10T00:07:53.798136136Z" level=info msg="Forcibly stopping sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\"" May 10 00:07:53.798893 containerd[1455]: time="2025-05-10T00:07:53.798216737Z" level=info msg="TearDown network for sandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\" successfully" May 10 00:07:53.800733 containerd[1455]: time="2025-05-10T00:07:53.800690484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 10 00:07:53.800835 containerd[1455]: time="2025-05-10T00:07:53.800756924Z" level=info msg="RemovePodSandbox \"7c08b744992d50b0eb48bcaa61f1f37e2045d4b438d2967366eb5e31ffd433e6\" returns successfully"
May 10 00:07:53.801362 containerd[1455]: time="2025-05-10T00:07:53.801229009Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\""
May 10 00:07:53.801515 containerd[1455]: time="2025-05-10T00:07:53.801322570Z" level=info msg="TearDown network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" successfully"
May 10 00:07:53.801515 containerd[1455]: time="2025-05-10T00:07:53.801460292Z" level=info msg="StopPodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" returns successfully"
May 10 00:07:53.801866 containerd[1455]: time="2025-05-10T00:07:53.801829336Z" level=info msg="RemovePodSandbox for \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\""
May 10 00:07:53.801866 containerd[1455]: time="2025-05-10T00:07:53.801862056Z" level=info msg="Forcibly stopping sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\""
May 10 00:07:53.801985 containerd[1455]: time="2025-05-10T00:07:53.801920737Z" level=info msg="TearDown network for sandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" successfully"
May 10 00:07:53.804377 containerd[1455]: time="2025-05-10T00:07:53.804342003Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.804442 containerd[1455]: time="2025-05-10T00:07:53.804408204Z" level=info msg="RemovePodSandbox \"dec5dfa24dcd88a2cc1d3fb9acb23f7fa1da4cfaf9e23a890db6b276c61ff526\" returns successfully"
May 10 00:07:53.805003 containerd[1455]: time="2025-05-10T00:07:53.804767128Z" level=info msg="StopPodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\""
May 10 00:07:53.805003 containerd[1455]: time="2025-05-10T00:07:53.804902409Z" level=info msg="TearDown network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" successfully"
May 10 00:07:53.805003 containerd[1455]: time="2025-05-10T00:07:53.804950570Z" level=info msg="StopPodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" returns successfully"
May 10 00:07:53.805429 containerd[1455]: time="2025-05-10T00:07:53.805397214Z" level=info msg="RemovePodSandbox for \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\""
May 10 00:07:53.805468 containerd[1455]: time="2025-05-10T00:07:53.805435415Z" level=info msg="Forcibly stopping sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\""
May 10 00:07:53.805525 containerd[1455]: time="2025-05-10T00:07:53.805509736Z" level=info msg="TearDown network for sandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" successfully"
May 10 00:07:53.808275 containerd[1455]: time="2025-05-10T00:07:53.808237045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.808340 containerd[1455]: time="2025-05-10T00:07:53.808307206Z" level=info msg="RemovePodSandbox \"f86a0e3b112b07a79058fb2b35d8a86ac28a6700bb9fd7b39024c80c34de9db5\" returns successfully"
May 10 00:07:53.808959 containerd[1455]: time="2025-05-10T00:07:53.808936453Z" level=info msg="StopPodSandbox for \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\""
May 10 00:07:53.809057 containerd[1455]: time="2025-05-10T00:07:53.809041734Z" level=info msg="TearDown network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\" successfully"
May 10 00:07:53.809085 containerd[1455]: time="2025-05-10T00:07:53.809056174Z" level=info msg="StopPodSandbox for \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\" returns successfully"
May 10 00:07:53.809456 containerd[1455]: time="2025-05-10T00:07:53.809430018Z" level=info msg="RemovePodSandbox for \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\""
May 10 00:07:53.809509 containerd[1455]: time="2025-05-10T00:07:53.809491939Z" level=info msg="Forcibly stopping sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\""
May 10 00:07:53.809598 containerd[1455]: time="2025-05-10T00:07:53.809580380Z" level=info msg="TearDown network for sandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\" successfully"
May 10 00:07:53.813830 containerd[1455]: time="2025-05-10T00:07:53.813781745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.813945 containerd[1455]: time="2025-05-10T00:07:53.813879226Z" level=info msg="RemovePodSandbox \"9b9b5a784923698865b82f134ee4f04dece9c7e4b3a822a09639506a16350baf\" returns successfully"
May 10 00:07:53.814309 containerd[1455]: time="2025-05-10T00:07:53.814282230Z" level=info msg="StopPodSandbox for \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\""
May 10 00:07:53.814401 containerd[1455]: time="2025-05-10T00:07:53.814384951Z" level=info msg="TearDown network for sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\" successfully"
May 10 00:07:53.814435 containerd[1455]: time="2025-05-10T00:07:53.814400872Z" level=info msg="StopPodSandbox for \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\" returns successfully"
May 10 00:07:53.815091 containerd[1455]: time="2025-05-10T00:07:53.815065839Z" level=info msg="RemovePodSandbox for \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\""
May 10 00:07:53.815148 containerd[1455]: time="2025-05-10T00:07:53.815096959Z" level=info msg="Forcibly stopping sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\""
May 10 00:07:53.815175 containerd[1455]: time="2025-05-10T00:07:53.815158160Z" level=info msg="TearDown network for sandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\" successfully"
May 10 00:07:53.820025 containerd[1455]: time="2025-05-10T00:07:53.819968452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.820109 containerd[1455]: time="2025-05-10T00:07:53.820040892Z" level=info msg="RemovePodSandbox \"c10892becf096b3e637e8c8e3e15e660accd9f96cd370a33a82ed9e25392f545\" returns successfully"
May 10 00:07:53.820711 containerd[1455]: time="2025-05-10T00:07:53.820665659Z" level=info msg="StopPodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\""
May 10 00:07:53.820784 containerd[1455]: time="2025-05-10T00:07:53.820770100Z" level=info msg="TearDown network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" successfully"
May 10 00:07:53.820808 containerd[1455]: time="2025-05-10T00:07:53.820784021Z" level=info msg="StopPodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" returns successfully"
May 10 00:07:53.821680 containerd[1455]: time="2025-05-10T00:07:53.821645830Z" level=info msg="RemovePodSandbox for \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\""
May 10 00:07:53.821728 containerd[1455]: time="2025-05-10T00:07:53.821686230Z" level=info msg="Forcibly stopping sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\""
May 10 00:07:53.821768 containerd[1455]: time="2025-05-10T00:07:53.821753191Z" level=info msg="TearDown network for sandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" successfully"
May 10 00:07:53.824602 containerd[1455]: time="2025-05-10T00:07:53.824569141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.824683 containerd[1455]: time="2025-05-10T00:07:53.824635222Z" level=info msg="RemovePodSandbox \"d8e20ff03e6b87cc110d5ef53e8031281a309ea47a8f40733ba643d34a2ee36b\" returns successfully"
May 10 00:07:53.826248 containerd[1455]: time="2025-05-10T00:07:53.826167079Z" level=info msg="StopPodSandbox for \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\""
May 10 00:07:53.826353 containerd[1455]: time="2025-05-10T00:07:53.826272040Z" level=info msg="TearDown network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\" successfully"
May 10 00:07:53.826353 containerd[1455]: time="2025-05-10T00:07:53.826283160Z" level=info msg="StopPodSandbox for \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\" returns successfully"
May 10 00:07:53.826591 containerd[1455]: time="2025-05-10T00:07:53.826556403Z" level=info msg="RemovePodSandbox for \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\""
May 10 00:07:53.826591 containerd[1455]: time="2025-05-10T00:07:53.826587603Z" level=info msg="Forcibly stopping sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\""
May 10 00:07:53.826667 containerd[1455]: time="2025-05-10T00:07:53.826653284Z" level=info msg="TearDown network for sandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\" successfully"
May 10 00:07:53.829304 containerd[1455]: time="2025-05-10T00:07:53.829267992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.829373 containerd[1455]: time="2025-05-10T00:07:53.829337833Z" level=info msg="RemovePodSandbox \"a6b674e652fda6143482986bbb98cab22b4708195d47f2f09d4d689271b239d8\" returns successfully"
May 10 00:07:53.829733 containerd[1455]: time="2025-05-10T00:07:53.829711117Z" level=info msg="StopPodSandbox for \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\""
May 10 00:07:53.829819 containerd[1455]: time="2025-05-10T00:07:53.829804118Z" level=info msg="TearDown network for sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\" successfully"
May 10 00:07:53.829857 containerd[1455]: time="2025-05-10T00:07:53.829817678Z" level=info msg="StopPodSandbox for \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\" returns successfully"
May 10 00:07:53.830392 containerd[1455]: time="2025-05-10T00:07:53.830361364Z" level=info msg="RemovePodSandbox for \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\""
May 10 00:07:53.830430 containerd[1455]: time="2025-05-10T00:07:53.830416204Z" level=info msg="Forcibly stopping sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\""
May 10 00:07:53.830513 containerd[1455]: time="2025-05-10T00:07:53.830497445Z" level=info msg="TearDown network for sandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\" successfully"
May 10 00:07:53.833928 containerd[1455]: time="2025-05-10T00:07:53.833887642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.834006 containerd[1455]: time="2025-05-10T00:07:53.833960843Z" level=info msg="RemovePodSandbox \"5ee17de608faef1c8f0040997cbdbb8a3f2ec849e89150d359d947ec5c3ee46d\" returns successfully"
May 10 00:07:53.834352 containerd[1455]: time="2025-05-10T00:07:53.834331087Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\""
May 10 00:07:53.834455 containerd[1455]: time="2025-05-10T00:07:53.834440648Z" level=info msg="TearDown network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" successfully"
May 10 00:07:53.834487 containerd[1455]: time="2025-05-10T00:07:53.834455048Z" level=info msg="StopPodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" returns successfully"
May 10 00:07:53.834852 containerd[1455]: time="2025-05-10T00:07:53.834821452Z" level=info msg="RemovePodSandbox for \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\""
May 10 00:07:53.834910 containerd[1455]: time="2025-05-10T00:07:53.834859132Z" level=info msg="Forcibly stopping sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\""
May 10 00:07:53.834937 containerd[1455]: time="2025-05-10T00:07:53.834917093Z" level=info msg="TearDown network for sandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" successfully"
May 10 00:07:53.837477 containerd[1455]: time="2025-05-10T00:07:53.837441400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.837561 containerd[1455]: time="2025-05-10T00:07:53.837501641Z" level=info msg="RemovePodSandbox \"91b203a3ea9387e43e18d103f3481a84640baa460cfb6d17599a8f65d8a29670\" returns successfully"
May 10 00:07:53.837913 containerd[1455]: time="2025-05-10T00:07:53.837893445Z" level=info msg="StopPodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\""
May 10 00:07:53.838007 containerd[1455]: time="2025-05-10T00:07:53.837994366Z" level=info msg="TearDown network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" successfully"
May 10 00:07:53.838038 containerd[1455]: time="2025-05-10T00:07:53.838008966Z" level=info msg="StopPodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" returns successfully"
May 10 00:07:53.839292 containerd[1455]: time="2025-05-10T00:07:53.839242460Z" level=info msg="RemovePodSandbox for \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\""
May 10 00:07:53.839292 containerd[1455]: time="2025-05-10T00:07:53.839275140Z" level=info msg="Forcibly stopping sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\""
May 10 00:07:53.839357 containerd[1455]: time="2025-05-10T00:07:53.839338021Z" level=info msg="TearDown network for sandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" successfully"
May 10 00:07:53.842986 containerd[1455]: time="2025-05-10T00:07:53.841886248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.842986 containerd[1455]: time="2025-05-10T00:07:53.841944729Z" level=info msg="RemovePodSandbox \"e4ce826178bb2d8ffada47f1964be8acc17d8dddbf917d8a0037110976431a65\" returns successfully"
May 10 00:07:53.842986 containerd[1455]: time="2025-05-10T00:07:53.842292213Z" level=info msg="StopPodSandbox for \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\""
May 10 00:07:53.842986 containerd[1455]: time="2025-05-10T00:07:53.842386934Z" level=info msg="TearDown network for sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\" successfully"
May 10 00:07:53.842986 containerd[1455]: time="2025-05-10T00:07:53.842397414Z" level=info msg="StopPodSandbox for \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\" returns successfully"
May 10 00:07:53.842986 containerd[1455]: time="2025-05-10T00:07:53.842620696Z" level=info msg="RemovePodSandbox for \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\""
May 10 00:07:53.842986 containerd[1455]: time="2025-05-10T00:07:53.842642616Z" level=info msg="Forcibly stopping sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\""
May 10 00:07:53.842986 containerd[1455]: time="2025-05-10T00:07:53.842714857Z" level=info msg="TearDown network for sandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\" successfully"
May 10 00:07:53.845430 containerd[1455]: time="2025-05-10T00:07:53.845393086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.845502 containerd[1455]: time="2025-05-10T00:07:53.845453567Z" level=info msg="RemovePodSandbox \"8375f65da33d850ff54f70b94cd7018dd499e5b3c4ef8c1d854bbe92b59ccdf4\" returns successfully"
May 10 00:07:53.845857 containerd[1455]: time="2025-05-10T00:07:53.845826691Z" level=info msg="StopPodSandbox for \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\""
May 10 00:07:53.845941 containerd[1455]: time="2025-05-10T00:07:53.845927452Z" level=info msg="TearDown network for sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\" successfully"
May 10 00:07:53.845985 containerd[1455]: time="2025-05-10T00:07:53.845940412Z" level=info msg="StopPodSandbox for \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\" returns successfully"
May 10 00:07:53.846310 containerd[1455]: time="2025-05-10T00:07:53.846288376Z" level=info msg="RemovePodSandbox for \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\""
May 10 00:07:53.846353 containerd[1455]: time="2025-05-10T00:07:53.846318136Z" level=info msg="Forcibly stopping sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\""
May 10 00:07:53.846400 containerd[1455]: time="2025-05-10T00:07:53.846385657Z" level=info msg="TearDown network for sandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\" successfully"
May 10 00:07:53.849343 containerd[1455]: time="2025-05-10T00:07:53.849300568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.849471 containerd[1455]: time="2025-05-10T00:07:53.849370849Z" level=info msg="RemovePodSandbox \"debec55208a2cc9bc249cb92a2a39ba8daf6d29ff9afee61a0d7be1d21026085\" returns successfully"
May 10 00:07:53.850240 containerd[1455]: time="2025-05-10T00:07:53.850069457Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\""
May 10 00:07:53.850240 containerd[1455]: time="2025-05-10T00:07:53.850179338Z" level=info msg="TearDown network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" successfully"
May 10 00:07:53.850240 containerd[1455]: time="2025-05-10T00:07:53.850191338Z" level=info msg="StopPodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" returns successfully"
May 10 00:07:53.852008 containerd[1455]: time="2025-05-10T00:07:53.851972237Z" level=info msg="RemovePodSandbox for \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\""
May 10 00:07:53.852050 containerd[1455]: time="2025-05-10T00:07:53.852012158Z" level=info msg="Forcibly stopping sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\""
May 10 00:07:53.852107 containerd[1455]: time="2025-05-10T00:07:53.852088198Z" level=info msg="TearDown network for sandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" successfully"
May 10 00:07:53.855333 containerd[1455]: time="2025-05-10T00:07:53.855272713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.855418 containerd[1455]: time="2025-05-10T00:07:53.855400634Z" level=info msg="RemovePodSandbox \"d17345d95100bf841394fddf6cec94d44cb83645858af93608efd0d4907222a4\" returns successfully"
May 10 00:07:53.855847 containerd[1455]: time="2025-05-10T00:07:53.855805839Z" level=info msg="StopPodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\""
May 10 00:07:53.855917 containerd[1455]: time="2025-05-10T00:07:53.855903160Z" level=info msg="TearDown network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" successfully"
May 10 00:07:53.855952 containerd[1455]: time="2025-05-10T00:07:53.855916960Z" level=info msg="StopPodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" returns successfully"
May 10 00:07:53.856803 containerd[1455]: time="2025-05-10T00:07:53.856771609Z" level=info msg="RemovePodSandbox for \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\""
May 10 00:07:53.856851 containerd[1455]: time="2025-05-10T00:07:53.856811249Z" level=info msg="Forcibly stopping sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\""
May 10 00:07:53.856897 containerd[1455]: time="2025-05-10T00:07:53.856883730Z" level=info msg="TearDown network for sandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" successfully"
May 10 00:07:53.861020 containerd[1455]: time="2025-05-10T00:07:53.860980774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.861089 containerd[1455]: time="2025-05-10T00:07:53.861047975Z" level=info msg="RemovePodSandbox \"921c8599351f2d8869fc9e2febf2d37494d7807cf2a91d98402fc75b09f82487\" returns successfully"
May 10 00:07:53.861440 containerd[1455]: time="2025-05-10T00:07:53.861419979Z" level=info msg="StopPodSandbox for \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\""
May 10 00:07:53.861516 containerd[1455]: time="2025-05-10T00:07:53.861502460Z" level=info msg="TearDown network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\" successfully"
May 10 00:07:53.861554 containerd[1455]: time="2025-05-10T00:07:53.861516140Z" level=info msg="StopPodSandbox for \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\" returns successfully"
May 10 00:07:53.862074 containerd[1455]: time="2025-05-10T00:07:53.862052506Z" level=info msg="RemovePodSandbox for \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\""
May 10 00:07:53.862129 containerd[1455]: time="2025-05-10T00:07:53.862076746Z" level=info msg="Forcibly stopping sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\""
May 10 00:07:53.862166 containerd[1455]: time="2025-05-10T00:07:53.862152667Z" level=info msg="TearDown network for sandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\" successfully"
May 10 00:07:53.865911 containerd[1455]: time="2025-05-10T00:07:53.865857987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.865987 containerd[1455]: time="2025-05-10T00:07:53.865922228Z" level=info msg="RemovePodSandbox \"26134fe4490181fb0fe5e82b431ffdd239e1bc587066208e5e65c9b1b6e1d435\" returns successfully"
May 10 00:07:53.866474 containerd[1455]: time="2025-05-10T00:07:53.866435153Z" level=info msg="StopPodSandbox for \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\""
May 10 00:07:53.866537 containerd[1455]: time="2025-05-10T00:07:53.866517554Z" level=info msg="TearDown network for sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\" successfully"
May 10 00:07:53.866537 containerd[1455]: time="2025-05-10T00:07:53.866527594Z" level=info msg="StopPodSandbox for \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\" returns successfully"
May 10 00:07:53.866880 containerd[1455]: time="2025-05-10T00:07:53.866735397Z" level=info msg="RemovePodSandbox for \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\""
May 10 00:07:53.866926 containerd[1455]: time="2025-05-10T00:07:53.866885038Z" level=info msg="Forcibly stopping sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\""
May 10 00:07:53.866988 containerd[1455]: time="2025-05-10T00:07:53.866971839Z" level=info msg="TearDown network for sandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\" successfully"
May 10 00:07:53.870030 containerd[1455]: time="2025-05-10T00:07:53.869998192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:07:53.870111 containerd[1455]: time="2025-05-10T00:07:53.870057032Z" level=info msg="RemovePodSandbox \"6248d7d84def9df43cb3f82f6e35f960db2ce70adea2023b1a723767d45b9a28\" returns successfully"
May 10 00:07:57.210358 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:55822.service - OpenSSH per-connection server daemon (10.0.0.1:55822).
May 10 00:07:57.249718 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 55822 ssh2: RSA SHA256:Q9AEy6fzrZ3SovUIgabQR390giiaQRil/rHIVS1a70c
May 10 00:07:57.250985 sshd-session[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:07:57.254909 systemd-logind[1429]: New session 20 of user core.
May 10 00:07:57.262009 systemd[1]: Started session-20.scope - Session 20 of User core.
May 10 00:07:57.386220 sshd[5774]: Connection closed by 10.0.0.1 port 55822
May 10 00:07:57.386694 sshd-session[5772]: pam_unix(sshd:session): session closed for user core
May 10 00:07:57.390025 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit.
May 10 00:07:57.390185 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:55822.service: Deactivated successfully.
May 10 00:07:57.392024 systemd[1]: session-20.scope: Deactivated successfully.
May 10 00:07:57.394568 systemd-logind[1429]: Removed session 20.