May 13 00:08:22.899648 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:08:22.899669 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon May 12 22:51:32 -00 2025
May 13 00:08:22.899679 kernel: KASLR enabled
May 13 00:08:22.899685 kernel: efi: EFI v2.7 by EDK II
May 13 00:08:22.899691 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 13 00:08:22.899696 kernel: random: crng init done
May 13 00:08:22.899703 kernel: ACPI: Early table checksum verification disabled
May 13 00:08:22.899709 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 13 00:08:22.899715 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:08:22.899723 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899729 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899735 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899741 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899747 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899754 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899762 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899768 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899775 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:08:22.899781 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:08:22.899787 kernel: NUMA: Failed to initialise from firmware
May 13 00:08:22.899794 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:08:22.899800 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 13 00:08:22.899806 kernel: Zone ranges:
May 13 00:08:22.899812 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:08:22.899819 kernel: DMA32 empty
May 13 00:08:22.899826 kernel: Normal empty
May 13 00:08:22.899832 kernel: Movable zone start for each node
May 13 00:08:22.899838 kernel: Early memory node ranges
May 13 00:08:22.899845 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 13 00:08:22.899851 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 00:08:22.899857 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 00:08:22.899863 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 00:08:22.899870 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 00:08:22.899876 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 00:08:22.899882 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 00:08:22.899889 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:08:22.899895 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:08:22.899903 kernel: psci: probing for conduit method from ACPI.
May 13 00:08:22.899909 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:08:22.899916 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:08:22.899925 kernel: psci: Trusted OS migration not required
May 13 00:08:22.899932 kernel: psci: SMC Calling Convention v1.1
May 13 00:08:22.899938 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:08:22.899947 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 00:08:22.899954 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 00:08:22.899960 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:08:22.899967 kernel: Detected PIPT I-cache on CPU0
May 13 00:08:22.899974 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:08:22.899981 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:08:22.899988 kernel: CPU features: detected: Spectre-v4
May 13 00:08:22.899994 kernel: CPU features: detected: Spectre-BHB
May 13 00:08:22.900001 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:08:22.900008 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:08:22.900016 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:08:22.900023 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:08:22.900029 kernel: alternatives: applying boot alternatives
May 13 00:08:22.900037 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:08:22.900044 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:08:22.900051 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:08:22.900057 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:08:22.900064 kernel: Fallback order for Node 0: 0
May 13 00:08:22.900071 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:08:22.900077 kernel: Policy zone: DMA
May 13 00:08:22.900084 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:08:22.900092 kernel: software IO TLB: area num 4.
May 13 00:08:22.900099 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 00:08:22.900106 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
May 13 00:08:22.900113 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:08:22.900119 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:08:22.900127 kernel: rcu: RCU event tracing is enabled.
May 13 00:08:22.900133 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:08:22.900140 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:08:22.900147 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:08:22.900154 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:08:22.900160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:08:22.900167 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:08:22.900175 kernel: GICv3: 256 SPIs implemented
May 13 00:08:22.900182 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:08:22.900189 kernel: Root IRQ handler: gic_handle_irq
May 13 00:08:22.900195 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 00:08:22.900202 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:08:22.900208 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:08:22.900215 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:08:22.900222 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:08:22.900229 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 00:08:22.900245 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 00:08:22.900252 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:08:22.900270 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:08:22.900277 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:08:22.900284 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:08:22.900291 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:08:22.900298 kernel: arm-pv: using stolen time PV
May 13 00:08:22.900305 kernel: Console: colour dummy device 80x25
May 13 00:08:22.900312 kernel: ACPI: Core revision 20230628
May 13 00:08:22.900319 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:08:22.900326 kernel: pid_max: default: 32768 minimum: 301
May 13 00:08:22.900333 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:08:22.900342 kernel: landlock: Up and running.
May 13 00:08:22.900349 kernel: SELinux: Initializing.
May 13 00:08:22.900356 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:08:22.900363 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:08:22.900370 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:08:22.900377 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:08:22.900384 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:08:22.900391 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:08:22.900399 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:08:22.900407 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:08:22.900414 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:08:22.900421 kernel: Remapping and enabling EFI services.
May 13 00:08:22.900427 kernel: smp: Bringing up secondary CPUs ...
May 13 00:08:22.900434 kernel: Detected PIPT I-cache on CPU1
May 13 00:08:22.900442 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:08:22.900449 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 00:08:22.900456 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:08:22.900462 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:08:22.900471 kernel: Detected PIPT I-cache on CPU2
May 13 00:08:22.900478 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:08:22.900485 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 00:08:22.900497 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:08:22.900506 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:08:22.900513 kernel: Detected PIPT I-cache on CPU3
May 13 00:08:22.900521 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:08:22.900528 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 00:08:22.900536 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:08:22.900543 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:08:22.900550 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:08:22.900560 kernel: SMP: Total of 4 processors activated.
May 13 00:08:22.900567 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:08:22.900574 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:08:22.900582 kernel: CPU features: detected: Common not Private translations
May 13 00:08:22.900589 kernel: CPU features: detected: CRC32 instructions
May 13 00:08:22.900596 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 00:08:22.900605 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:08:22.900612 kernel: CPU features: detected: LSE atomic instructions
May 13 00:08:22.900620 kernel: CPU features: detected: Privileged Access Never
May 13 00:08:22.900627 kernel: CPU features: detected: RAS Extension Support
May 13 00:08:22.900634 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:08:22.900641 kernel: CPU: All CPU(s) started at EL1
May 13 00:08:22.900648 kernel: alternatives: applying system-wide alternatives
May 13 00:08:22.900656 kernel: devtmpfs: initialized
May 13 00:08:22.900663 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:08:22.900671 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:08:22.900679 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:08:22.900686 kernel: SMBIOS 3.0.0 present.
May 13 00:08:22.900693 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 13 00:08:22.900701 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:08:22.900708 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:08:22.900715 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:08:22.900722 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:08:22.900729 kernel: audit: initializing netlink subsys (disabled)
May 13 00:08:22.900738 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
May 13 00:08:22.900745 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:08:22.900753 kernel: cpuidle: using governor menu
May 13 00:08:22.900760 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:08:22.900767 kernel: ASID allocator initialised with 32768 entries
May 13 00:08:22.900774 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:08:22.900782 kernel: Serial: AMBA PL011 UART driver
May 13 00:08:22.900789 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 00:08:22.900797 kernel: Modules: 0 pages in range for non-PLT usage
May 13 00:08:22.900806 kernel: Modules: 509008 pages in range for PLT usage
May 13 00:08:22.900814 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:08:22.900822 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:08:22.900829 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:08:22.900837 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 00:08:22.900844 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:08:22.900851 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:08:22.900859 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:08:22.900866 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 00:08:22.900873 kernel: ACPI: Added _OSI(Module Device)
May 13 00:08:22.900882 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:08:22.900889 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:08:22.900896 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:08:22.900904 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:08:22.900911 kernel: ACPI: Interpreter enabled
May 13 00:08:22.900918 kernel: ACPI: Using GIC for interrupt routing
May 13 00:08:22.900925 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:08:22.900933 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:08:22.900940 kernel: printk: console [ttyAMA0] enabled
May 13 00:08:22.900948 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:08:22.901089 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:08:22.901164 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:08:22.901230 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:08:22.901412 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:08:22.901481 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:08:22.901491 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:08:22.901503 kernel: PCI host bridge to bus 0000:00
May 13 00:08:22.901574 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:08:22.901636 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:08:22.901709 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:08:22.901770 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:08:22.901868 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:08:22.901952 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:08:22.902037 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:08:22.902104 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:08:22.902170 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:08:22.902243 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:08:22.902325 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:08:22.902393 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:08:22.902456 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:08:22.902514 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:08:22.902577 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:08:22.902586 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:08:22.902594 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:08:22.902602 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:08:22.902609 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:08:22.902616 kernel: iommu: Default domain type: Translated
May 13 00:08:22.902626 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:08:22.902634 kernel: efivars: Registered efivars operations
May 13 00:08:22.902641 kernel: vgaarb: loaded
May 13 00:08:22.902648 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:08:22.902656 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:08:22.902663 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:08:22.902670 kernel: pnp: PnP ACPI init
May 13 00:08:22.902743 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:08:22.902754 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:08:22.902763 kernel: NET: Registered PF_INET protocol family
May 13 00:08:22.902771 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:08:22.902778 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:08:22.902785 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:08:22.902793 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:08:22.902800 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:08:22.902808 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:08:22.902815 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:08:22.902823 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:08:22.902831 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:08:22.902838 kernel: PCI: CLS 0 bytes, default 64
May 13 00:08:22.902845 kernel: kvm [1]: HYP mode not available
May 13 00:08:22.902853 kernel: Initialise system trusted keyrings
May 13 00:08:22.902860 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:08:22.902868 kernel: Key type asymmetric registered
May 13 00:08:22.902875 kernel: Asymmetric key parser 'x509' registered
May 13 00:08:22.902886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 00:08:22.902897 kernel: io scheduler mq-deadline registered
May 13 00:08:22.902906 kernel: io scheduler kyber registered
May 13 00:08:22.902913 kernel: io scheduler bfq registered
May 13 00:08:22.902921 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:08:22.902928 kernel: ACPI: button: Power Button [PWRB]
May 13 00:08:22.902936 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:08:22.903004 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:08:22.903014 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:08:22.903022 kernel: thunder_xcv, ver 1.0
May 13 00:08:22.903029 kernel: thunder_bgx, ver 1.0
May 13 00:08:22.903038 kernel: nicpf, ver 1.0
May 13 00:08:22.903045 kernel: nicvf, ver 1.0
May 13 00:08:22.903133 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:08:22.903196 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:08:22 UTC (1747094902)
May 13 00:08:22.903207 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:08:22.903214 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:08:22.903222 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 00:08:22.903229 kernel: watchdog: Hard watchdog permanently disabled
May 13 00:08:22.903245 kernel: NET: Registered PF_INET6 protocol family
May 13 00:08:22.903253 kernel: Segment Routing with IPv6
May 13 00:08:22.903267 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:08:22.903275 kernel: NET: Registered PF_PACKET protocol family
May 13 00:08:22.903283 kernel: Key type dns_resolver registered
May 13 00:08:22.903290 kernel: registered taskstats version 1
May 13 00:08:22.903297 kernel: Loading compiled-in X.509 certificates
May 13 00:08:22.903305 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce22d51a4ec909274ada9cb7da7d7cb78db539c6'
May 13 00:08:22.903312 kernel: Key type .fscrypt registered
May 13 00:08:22.903322 kernel: Key type fscrypt-provisioning registered
May 13 00:08:22.903329 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:08:22.903337 kernel: ima: Allocated hash algorithm: sha1
May 13 00:08:22.903344 kernel: ima: No architecture policies found
May 13 00:08:22.903352 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:08:22.903359 kernel: clk: Disabling unused clocks
May 13 00:08:22.903366 kernel: Freeing unused kernel memory: 39424K
May 13 00:08:22.903374 kernel: Run /init as init process
May 13 00:08:22.903381 kernel: with arguments:
May 13 00:08:22.903390 kernel: /init
May 13 00:08:22.903398 kernel: with environment:
May 13 00:08:22.903405 kernel: HOME=/
May 13 00:08:22.903413 kernel: TERM=linux
May 13 00:08:22.903420 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:08:22.903429 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:08:22.903439 systemd[1]: Detected virtualization kvm.
May 13 00:08:22.903448 systemd[1]: Detected architecture arm64.
May 13 00:08:22.903456 systemd[1]: Running in initrd.
May 13 00:08:22.903464 systemd[1]: No hostname configured, using default hostname.
May 13 00:08:22.903472 systemd[1]: Hostname set to .
May 13 00:08:22.903480 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:08:22.903488 systemd[1]: Queued start job for default target initrd.target.
May 13 00:08:22.903496 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:08:22.903504 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:08:22.903514 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:08:22.903522 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:08:22.903530 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:08:22.903539 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:08:22.903548 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:08:22.903557 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:08:22.903565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:08:22.903575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:08:22.903583 systemd[1]: Reached target paths.target - Path Units.
May 13 00:08:22.903591 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:08:22.903599 systemd[1]: Reached target swap.target - Swaps.
May 13 00:08:22.903607 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:08:22.903615 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:08:22.903623 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:08:22.903631 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:08:22.903639 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:08:22.903649 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:08:22.903657 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:08:22.903665 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:08:22.903674 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:08:22.903682 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:08:22.903690 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:08:22.903699 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:08:22.903707 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:08:22.903716 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:08:22.903724 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:08:22.903732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:08:22.903741 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:08:22.903749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:08:22.903757 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:08:22.903767 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:08:22.903776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:08:22.903803 systemd-journald[238]: Collecting audit messages is disabled.
May 13 00:08:22.903825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:08:22.903833 systemd-journald[238]: Journal started
May 13 00:08:22.903852 systemd-journald[238]: Runtime Journal (/run/log/journal/bfd416ef68ee4b6fb3e6abeaab294dd8) is 5.9M, max 47.3M, 41.4M free.
May 13 00:08:22.897192 systemd-modules-load[239]: Inserted module 'overlay'
May 13 00:08:22.906194 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:08:22.907575 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:08:22.913200 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:08:22.913222 kernel: Bridge firewalling registered
May 13 00:08:22.912492 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 13 00:08:22.913415 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:08:22.915884 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:08:22.917766 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:08:22.922217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:08:22.927609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:08:22.932864 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:08:22.935603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:08:22.949426 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:08:22.950651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:08:22.956448 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:08:22.969311 dracut-cmdline[280]: dracut-dracut-053
May 13 00:08:22.973363 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:08:22.974986 systemd-resolved[278]: Positive Trust Anchors:
May 13 00:08:22.974996 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:08:22.975028 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:08:22.979854 systemd-resolved[278]: Defaulting to hostname 'linux'.
May 13 00:08:22.980936 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:08:22.986748 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:08:23.057276 kernel: SCSI subsystem initialized
May 13 00:08:23.061283 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:08:23.071300 kernel: iscsi: registered transport (tcp)
May 13 00:08:23.086626 kernel: iscsi: registered transport (qla4xxx)
May 13 00:08:23.086656 kernel: QLogic iSCSI HBA Driver
May 13 00:08:23.133326 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:08:23.149487 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:08:23.166742 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:08:23.166813 kernel: device-mapper: uevent: version 1.0.3
May 13 00:08:23.168281 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:08:23.216297 kernel: raid6: neonx8 gen() 15774 MB/s
May 13 00:08:23.233289 kernel: raid6: neonx4 gen() 15644 MB/s
May 13 00:08:23.250283 kernel: raid6: neonx2 gen() 13223 MB/s
May 13 00:08:23.267284 kernel: raid6: neonx1 gen() 10466 MB/s
May 13 00:08:23.284282 kernel: raid6: int64x8 gen() 6952 MB/s
May 13 00:08:23.301284 kernel: raid6: int64x4 gen() 7344 MB/s
May 13 00:08:23.318284 kernel: raid6: int64x2 gen() 6124 MB/s
May 13 00:08:23.335392 kernel: raid6: int64x1 gen() 5043 MB/s
May 13 00:08:23.335408 kernel: raid6: using algorithm neonx8 gen() 15774 MB/s
May 13 00:08:23.353328 kernel: raid6: .... xor() 11934 MB/s, rmw enabled
May 13 00:08:23.353342 kernel: raid6: using neon recovery algorithm
May 13 00:08:23.359658 kernel: xor: measuring software checksum speed
May 13 00:08:23.359675 kernel: 8regs : 19764 MB/sec
May 13 00:08:23.360369 kernel: 32regs : 19655 MB/sec
May 13 00:08:23.361612 kernel: arm64_neon : 26963 MB/sec
May 13 00:08:23.361623 kernel: xor: using function: arm64_neon (26963 MB/sec)
May 13 00:08:23.413295 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:08:23.425333 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:08:23.433655 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:08:23.445513 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 13 00:08:23.448821 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:08:23.459474 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:08:23.472408 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
May 13 00:08:23.501196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:08:23.513542 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:08:23.554167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:08:23.565502 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:08:23.578862 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:08:23.580766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:08:23.583073 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:08:23.585478 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:08:23.592422 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:08:23.607111 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:08:23.616295 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 00:08:23.618279 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:08:23.618409 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:08:23.619749 kernel: GPT:9289727 != 19775487
May 13 00:08:23.619789 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:08:23.620841 kernel: GPT:9289727 != 19775487
May 13 00:08:23.620876 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:08:23.621488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:08:23.624112 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:08:23.624229 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:08:23.631899 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:08:23.633436 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:08:23.633578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:08:23.636989 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:08:23.647288 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (524)
May 13 00:08:23.649544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:08:23.655281 kernel: BTRFS: device fsid ffc5eb33-beca-4ca0-9735-b9a50e66f21e devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (509)
May 13 00:08:23.661890 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:08:23.663495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:08:23.671236 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:08:23.681224 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:08:23.685125 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:08:23.686338 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:08:23.695423 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:08:23.697322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:08:23.707373 disk-uuid[551]: Primary Header is updated.
May 13 00:08:23.707373 disk-uuid[551]: Secondary Entries is updated.
May 13 00:08:23.707373 disk-uuid[551]: Secondary Header is updated.
May 13 00:08:23.711718 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:08:23.724744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:08:24.725289 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:08:24.725582 disk-uuid[553]: The operation has completed successfully.
May 13 00:08:24.748810 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:08:24.748903 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:08:24.769427 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:08:24.773351 sh[575]: Success
May 13 00:08:24.789670 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:08:24.827676 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:08:24.829416 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:08:24.831168 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:08:24.840367 kernel: BTRFS info (device dm-0): first mount of filesystem ffc5eb33-beca-4ca0-9735-b9a50e66f21e
May 13 00:08:24.840398 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 00:08:24.840409 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:08:24.842766 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:08:24.842780 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:08:24.846731 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:08:24.847731 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:08:24.858407 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:08:24.859870 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:08:24.867192 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:08:24.867245 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:08:24.867256 kernel: BTRFS info (device vda6): using free space tree
May 13 00:08:24.870315 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:08:24.876761 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:08:24.878435 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:08:24.884923 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:08:24.892425 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:08:24.948550 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:08:24.961431 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:08:24.984097 systemd-networkd[763]: lo: Link UP
May 13 00:08:24.984108 systemd-networkd[763]: lo: Gained carrier
May 13 00:08:24.984881 systemd-networkd[763]: Enumeration completed
May 13 00:08:24.984987 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:08:24.985311 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:08:24.985315 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:08:24.986191 systemd-networkd[763]: eth0: Link UP
May 13 00:08:24.993764 ignition[672]: Ignition 2.19.0
May 13 00:08:24.986195 systemd-networkd[763]: eth0: Gained carrier
May 13 00:08:24.993771 ignition[672]: Stage: fetch-offline
May 13 00:08:24.986202 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:08:24.993804 ignition[672]: no configs at "/usr/lib/ignition/base.d"
May 13 00:08:24.986563 systemd[1]: Reached target network.target - Network.
May 13 00:08:24.993812 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:08:24.993962 ignition[672]: parsed url from cmdline: ""
May 13 00:08:25.003317 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:08:24.993965 ignition[672]: no config URL provided
May 13 00:08:24.993969 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:08:24.993976 ignition[672]: no config at "/usr/lib/ignition/user.ign"
May 13 00:08:24.993998 ignition[672]: op(1): [started] loading QEMU firmware config module
May 13 00:08:24.994005 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:08:25.001339 ignition[672]: op(1): [finished] loading QEMU firmware config module
May 13 00:08:25.046121 ignition[672]: parsing config with SHA512: 3181f34a0a3c21a5b11d579ab03cf7fc912608bd9dfdbb7b594ca002e64c489b5230e9187a6d3fb0cd3e9dfe148ba261b11588f5d018f59e1f26d8cf2344d389
May 13 00:08:25.050326 unknown[672]: fetched base config from "system"
May 13 00:08:25.050336 unknown[672]: fetched user config from "qemu"
May 13 00:08:25.053601 ignition[672]: fetch-offline: fetch-offline passed
May 13 00:08:25.055039 ignition[672]: Ignition finished successfully
May 13 00:08:25.056154 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:08:25.057787 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:08:25.067413 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:08:25.078033 ignition[771]: Ignition 2.19.0
May 13 00:08:25.078043 ignition[771]: Stage: kargs
May 13 00:08:25.078204 ignition[771]: no configs at "/usr/lib/ignition/base.d"
May 13 00:08:25.078213 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:08:25.079071 ignition[771]: kargs: kargs passed
May 13 00:08:25.082728 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:08:25.079112 ignition[771]: Ignition finished successfully
May 13 00:08:25.085437 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:08:25.097911 ignition[779]: Ignition 2.19.0
May 13 00:08:25.097921 ignition[779]: Stage: disks
May 13 00:08:25.098075 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 13 00:08:25.098085 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:08:25.098919 ignition[779]: disks: disks passed
May 13 00:08:25.101320 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:08:25.098961 ignition[779]: Ignition finished successfully
May 13 00:08:25.103105 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:08:25.104407 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:08:25.106271 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:08:25.107805 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:08:25.109596 systemd[1]: Reached target basic.target - Basic System.
May 13 00:08:25.121430 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:08:25.131274 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:08:25.135466 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:08:25.147360 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:08:25.192288 kernel: EXT4-fs (vda9): mounted filesystem 9903c37e-4e5a-41d4-80e5-5c3428d04b7e r/w with ordered data mode. Quota mode: none.
May 13 00:08:25.192784 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:08:25.194042 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:08:25.213356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:08:25.215592 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:08:25.216623 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:08:25.216663 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:08:25.216684 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:08:25.222638 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:08:25.224363 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:08:25.230258 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
May 13 00:08:25.230436 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:08:25.230455 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:08:25.231943 kernel: BTRFS info (device vda6): using free space tree
May 13 00:08:25.234278 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:08:25.235415 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:08:25.264799 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:08:25.268440 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
May 13 00:08:25.272362 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:08:25.276168 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:08:25.343933 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:08:25.355346 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:08:25.357739 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:08:25.362296 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:08:25.379557 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:08:25.381343 ignition[913]: INFO : Ignition 2.19.0
May 13 00:08:25.381343 ignition[913]: INFO : Stage: mount
May 13 00:08:25.381343 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:08:25.381343 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:08:25.385728 ignition[913]: INFO : mount: mount passed
May 13 00:08:25.385728 ignition[913]: INFO : Ignition finished successfully
May 13 00:08:25.383464 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:08:25.389345 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:08:25.839392 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:08:25.851441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:08:25.857746 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
May 13 00:08:25.857780 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:08:25.857791 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:08:25.859299 kernel: BTRFS info (device vda6): using free space tree
May 13 00:08:25.861279 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:08:25.862446 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:08:25.878780 ignition[942]: INFO : Ignition 2.19.0
May 13 00:08:25.880345 ignition[942]: INFO : Stage: files
May 13 00:08:25.880345 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:08:25.880345 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:08:25.883442 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:08:25.883442 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:08:25.883442 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:08:25.883442 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:08:25.888496 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:08:25.888496 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:08:25.888496 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:08:25.888496 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 00:08:25.883803 unknown[942]: wrote ssh authorized keys file for user: core
May 13 00:08:25.951805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:08:26.138871 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:08:26.138871 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:08:26.143538 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 13 00:08:26.411475 systemd-networkd[763]: eth0: Gained IPv6LL May 13 00:08:26.456014 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 00:08:26.860001 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:08:26.860001 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 00:08:26.860001 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:08:26.874986 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:08:26.874986 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 00:08:26.874986 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 13 00:08:26.874986 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:08:26.874986 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:08:26.874986 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 13 00:08:26.874986 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:08:26.894896 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:08:26.894896 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:08:26.894896 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:08:26.894896 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 13 00:08:26.894896 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:08:26.894896 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:08:26.894896 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:08:26.894896 ignition[942]: INFO : files: files passed May 13 00:08:26.894896 ignition[942]: INFO : Ignition finished successfully May 13 00:08:26.896560 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 00:08:26.910468 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 00:08:26.913361 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
May 13 00:08:26.914930 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:08:26.915012 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:08:26.921959 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:08:26.925532 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:08:26.925532 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:08:26.928708 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:08:26.929147 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:08:26.931925 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:08:26.941457 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:08:26.961588 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:08:26.961721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:08:26.964010 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:08:26.965875 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:08:26.967700 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:08:26.968527 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:08:26.984564 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:08:26.994453 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:08:27.002364 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:08:27.003691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:08:27.005657 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:08:27.006617 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:08:27.006749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:08:27.009752 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:08:27.010862 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:08:27.012514 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:08:27.014547 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:08:27.016431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:08:27.018167 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:08:27.019996 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:08:27.022141 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:08:27.025490 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:08:27.027391 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:08:27.029315 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:08:27.029453 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:08:27.032155 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:08:27.033369 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:08:27.035156 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:08:27.039003 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:08:27.040258 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:08:27.040397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:08:27.043331 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:08:27.043451 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:08:27.045363 systemd[1]: Stopped target paths.target - Path Units. May 13 00:08:27.047166 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:08:27.050520 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:08:27.052738 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:08:27.055094 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:08:27.056924 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:08:27.057022 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:08:27.058656 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:08:27.058741 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:08:27.060279 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:08:27.060404 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:08:27.062419 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:08:27.062522 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:08:27.078491 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:08:27.080191 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:08:27.081086 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:08:27.081215 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:08:27.083305 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:08:27.083413 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:08:27.090076 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:08:27.093298 ignition[997]: INFO : Ignition 2.19.0 May 13 00:08:27.093298 ignition[997]: INFO : Stage: umount May 13 00:08:27.093298 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:08:27.093298 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:08:27.093298 ignition[997]: INFO : umount: umount passed May 13 00:08:27.093298 ignition[997]: INFO : Ignition finished successfully May 13 00:08:27.092974 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:08:27.095139 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:08:27.096759 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:08:27.096854 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:08:27.102256 systemd[1]: Stopped target network.target - Network. May 13 00:08:27.104151 systemd[1]: ignition-disks.service: Deactivated successfully. 
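
Ignition's final umount stage above runs without any baked-in config, which is all the two "no config" messages mean; the stage simply unmounts and exits. An illustrative check of the two search paths it names, both empty on a stock image:

    ls /usr/lib/ignition/base.d
    ls /usr/lib/ignition/base.platform.d/qemu
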
May 13 00:08:27.104251 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:08:27.105915 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:08:27.105962 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:08:27.108325 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:08:27.108385 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:08:27.113078 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:08:27.113135 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:08:27.116580 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:08:27.118029 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:08:27.125860 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:08:27.125974 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:08:27.128003 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:08:27.128079 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:08:27.130316 systemd-networkd[763]: eth0: DHCPv6 lease lost May 13 00:08:27.131871 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:08:27.132012 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:08:27.133718 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:08:27.133748 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:08:27.145377 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:08:27.146331 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:08:27.146395 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:08:27.148483 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:08:27.148530 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:08:27.150430 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:08:27.150477 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:08:27.152768 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:08:27.161342 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:08:27.161446 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:08:27.163527 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:08:27.163587 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:08:27.165472 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:08:27.167287 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:08:27.178913 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:08:27.179053 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:08:27.181522 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:08:27.181562 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:08:27.183400 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:08:27.183435 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 13 00:08:27.185234 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:08:27.185296 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:08:27.187907 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:08:27.187950 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:08:27.190840 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:08:27.190882 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:08:27.205443 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:08:27.206493 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:08:27.206551 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:08:27.208693 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 00:08:27.208737 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:08:27.210701 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:08:27.210744 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:08:27.212822 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:08:27.212866 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:08:27.214994 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:08:27.215094 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:08:27.218507 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:08:27.220796 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:08:27.230809 systemd[1]: Switching root. May 13 00:08:27.255316 systemd-journald[238]: Journal stopped May 13 00:08:27.991073 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). May 13 00:08:27.991132 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:08:27.991145 kernel: SELinux: policy capability open_perms=1 May 13 00:08:27.991155 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:08:27.991165 kernel: SELinux: policy capability always_check_network=0 May 13 00:08:27.991179 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:08:27.991189 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:08:27.991198 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:08:27.991208 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:08:27.991231 kernel: audit: type=1403 audit(1747094907.387:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:08:27.991242 systemd[1]: Successfully loaded SELinux policy in 34.869ms. May 13 00:08:27.991317 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.465ms. May 13 00:08:27.991334 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:08:27.991346 systemd[1]: Detected virtualization kvm. May 13 00:08:27.991356 systemd[1]: Detected architecture arm64. 
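
The hand-off above ("Switching root" followed by "Journal stopped") is initrd-switch-root.service moving PID 1 from the initramfs onto /sysroot; in stock systemd that unit is essentially a wrapper around the switch-root verb, along these lines (a rough equivalent, not a quote of the unit file):

    systemctl --no-block switch-root /sysroot
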
May 13 00:08:27.991367 systemd[1]: Detected first boot. May 13 00:08:27.991377 systemd[1]: Initializing machine ID from VM UUID. May 13 00:08:27.991392 zram_generator::config[1042]: No configuration found. May 13 00:08:27.991420 systemd[1]: Populated /etc with preset unit settings. May 13 00:08:27.991431 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:08:27.991442 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:08:27.991452 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:08:27.991464 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:08:27.991475 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:08:27.991485 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:08:27.991495 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:08:27.991508 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:08:27.991519 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:08:27.991530 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:08:27.991541 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:08:27.991555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:08:27.991566 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:08:27.991578 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:08:27.991589 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:08:27.991600 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 00:08:27.991613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:08:27.991623 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 00:08:27.991634 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:08:27.991644 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:08:27.991656 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:08:27.991668 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:08:27.991679 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:08:27.991691 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:08:27.991702 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:08:27.991712 systemd[1]: Reached target slices.target - Slice Units. May 13 00:08:27.991723 systemd[1]: Reached target swap.target - Swaps. May 13 00:08:27.991734 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:08:27.991744 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:08:27.991755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:08:27.991765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
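
"Populated /etc with preset unit settings" above is systemd applying preset files on first boot, the same mechanism Ignition manipulated earlier for prepare-helm.service and coreos-metadata.service. A preset file is just an ordered list of enable/disable rules; a hypothetical example:

    # /usr/lib/systemd/system-preset/80-example.preset (hypothetical)
    enable prepare-helm.service
    disable coreos-metadata.service
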
May 13 00:08:27.991775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:08:27.991786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:08:27.991799 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:08:27.991809 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:08:27.991820 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:08:27.991830 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:08:27.991841 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:08:27.991851 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:08:27.991862 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:08:27.991872 systemd[1]: Reached target machines.target - Containers. May 13 00:08:27.991885 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:08:27.991896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:08:27.991907 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:08:27.991917 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:08:27.991927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:08:27.991940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:08:27.991951 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:08:27.991961 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:08:27.991972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:08:27.991984 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:08:27.991996 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:08:27.992007 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:08:27.992017 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:08:27.992028 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:08:27.992038 kernel: fuse: init (API version 7.39) May 13 00:08:27.992048 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:08:27.992059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:08:27.992069 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:08:27.992082 kernel: ACPI: bus type drm_connector registered May 13 00:08:27.992093 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:08:27.992103 kernel: loop: module loaded May 13 00:08:27.992113 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:08:27.992123 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:08:27.992134 systemd[1]: Stopped verity-setup.service. May 13 00:08:27.992144 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
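
The modprobe@*.service units starting above are instances of a single template that loads one kernel module per instance name; the "fuse: init", "loop: module loaded" and drm_connector kernel lines nearby are those modules arriving. Illustrative manual equivalent:

    systemctl start modprobe@fuse.service
    lsmod | grep fuse    # module is now loaded
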
May 13 00:08:27.992155 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:08:27.992166 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:08:27.992178 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:08:27.992189 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:08:27.992202 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:08:27.992242 systemd-journald[1109]: Collecting audit messages is disabled. May 13 00:08:27.992276 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:08:27.992289 systemd-journald[1109]: Journal started May 13 00:08:27.992311 systemd-journald[1109]: Runtime Journal (/run/log/journal/bfd416ef68ee4b6fb3e6abeaab294dd8) is 5.9M, max 47.3M, 41.4M free. May 13 00:08:27.766626 systemd[1]: Queued start job for default target multi-user.target. May 13 00:08:27.791100 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:08:27.791661 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:08:27.996371 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:08:27.997160 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:08:27.998616 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:08:27.998763 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:08:28.001115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:08:28.002335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:08:28.004375 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:08:28.004650 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:08:28.006008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:08:28.006141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:08:28.007925 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:08:28.008126 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:08:28.010641 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:08:28.010776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:08:28.014362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:08:28.018302 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:08:28.019901 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:08:28.029327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:08:28.036650 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:08:28.048465 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:08:28.053349 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:08:28.054464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:08:28.054508 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:08:28.056855 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
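
systemd-journald above sizes its runtime journal at 5.9M (cap 47.3M) in /run and, a few lines further on, flushes it to the persistent journal under /var/log/journal. Usage of either store can be queried at any time (illustrative):

    journalctl --disk-usage
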
May 13 00:08:28.062198 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:08:28.064475 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:08:28.065608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:08:28.068605 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:08:28.070652 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:08:28.071923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:08:28.075454 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:08:28.079116 systemd-journald[1109]: Time spent on flushing to /var/log/journal/bfd416ef68ee4b6fb3e6abeaab294dd8 is 15.572ms for 854 entries. May 13 00:08:28.079116 systemd-journald[1109]: System Journal (/var/log/journal/bfd416ef68ee4b6fb3e6abeaab294dd8) is 8.0M, max 195.6M, 187.6M free. May 13 00:08:28.101877 systemd-journald[1109]: Received client request to flush runtime journal. May 13 00:08:28.076653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:08:28.080439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:08:28.083804 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:08:28.089616 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:08:28.094668 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:08:28.097089 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:08:28.098518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:08:28.100147 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:08:28.101703 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:08:28.107379 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:08:28.110632 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:08:28.118474 kernel: loop0: detected capacity change from 0 to 114328 May 13 00:08:28.121520 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:08:28.124298 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:08:28.125054 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. May 13 00:08:28.125067 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. May 13 00:08:28.129758 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:08:28.135203 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:08:28.137222 udevadm[1157]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:08:28.138284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:08:28.154090 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
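
systemd-machine-id-commit.service, finishing just below, persists the transient machine ID that was generated from the VM UUID earlier in this boot. The manual equivalent is the --commit verb (illustrative):

    systemd-machine-id-setup --commit
    cat /etc/machine-id
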
May 13 00:08:28.154795 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:08:28.167301 kernel: loop1: detected capacity change from 0 to 189592 May 13 00:08:28.185923 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:08:28.196517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:08:28.208619 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. May 13 00:08:28.208638 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. May 13 00:08:28.212664 kernel: loop2: detected capacity change from 0 to 114432 May 13 00:08:28.212407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:08:28.263311 kernel: loop3: detected capacity change from 0 to 114328 May 13 00:08:28.269306 kernel: loop4: detected capacity change from 0 to 189592 May 13 00:08:28.278291 kernel: loop5: detected capacity change from 0 to 114432 May 13 00:08:28.282000 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:08:28.282438 (sd-merge)[1182]: Merged extensions into '/usr'. May 13 00:08:28.288563 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:08:28.288616 systemd[1]: Reloading... May 13 00:08:28.351294 zram_generator::config[1205]: No configuration found. May 13 00:08:28.431964 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:08:28.464238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:08:28.500062 systemd[1]: Reloading finished in 210 ms. May 13 00:08:28.532088 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 00:08:28.533564 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:08:28.548480 systemd[1]: Starting ensure-sysext.service... May 13 00:08:28.550558 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:08:28.567252 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... May 13 00:08:28.567290 systemd[1]: Reloading... May 13 00:08:28.574498 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:08:28.574753 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:08:28.575390 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:08:28.575604 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 13 00:08:28.575657 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 13 00:08:28.578148 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:08:28.578160 systemd-tmpfiles[1244]: Skipping /boot May 13 00:08:28.587475 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:08:28.587491 systemd-tmpfiles[1244]: Skipping /boot May 13 00:08:28.613328 zram_generator::config[1267]: No configuration found. 
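
The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr, after which systemd reloads to pick up the units they ship. The merge can be inspected or redone by hand (illustrative):

    systemd-sysext status     # list merged extensions per hierarchy
    systemd-sysext refresh    # unmerge and re-merge after changing images
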
May 13 00:08:28.700055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:08:28.737315 systemd[1]: Reloading finished in 169 ms. May 13 00:08:28.752830 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:08:28.767825 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:08:28.775538 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:08:28.778427 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:08:28.783543 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:08:28.787979 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:08:28.804571 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:08:28.808552 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:08:28.812605 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:08:28.816427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:08:28.819713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:08:28.824843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:08:28.829690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:08:28.831119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:08:28.834150 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:08:28.848234 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:08:28.850226 systemd-udevd[1313]: Using default interface naming scheme 'v255'. May 13 00:08:28.850761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:08:28.850916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:08:28.853761 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:08:28.853910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:08:28.855668 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:08:28.855784 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:08:28.859306 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:08:28.871237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:08:28.879660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:08:28.893140 augenrules[1339]: No rules May 13 00:08:28.894561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:08:28.897011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:08:28.899548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:08:28.903522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 13 00:08:28.904689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:08:28.905326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:08:28.908053 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:08:28.909727 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:08:28.911499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:08:28.911651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:08:28.913282 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:08:28.913408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:08:28.915127 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:08:28.915285 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:08:28.921172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:08:28.921375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:08:28.924713 systemd[1]: Finished ensure-sysext.service. May 13 00:08:28.943409 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:08:28.944644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:08:28.944724 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:08:28.963558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:08:28.964745 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:08:28.964949 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:08:28.967777 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 00:08:28.971280 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1360) May 13 00:08:29.034385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:08:29.036076 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:08:29.037345 systemd-networkd[1374]: lo: Link UP May 13 00:08:29.037354 systemd-networkd[1374]: lo: Gained carrier May 13 00:08:29.037739 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:08:29.038186 systemd-networkd[1374]: Enumeration completed May 13 00:08:29.039476 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:08:29.039487 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:08:29.040109 systemd-networkd[1374]: eth0: Link UP May 13 00:08:29.040118 systemd-networkd[1374]: eth0: Gained carrier May 13 00:08:29.040132 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
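
systemd-networkd above matches eth0 against the catch-all /usr/lib/systemd/network/zz-default.network shipped with Flatcar, which is why DHCP starts on the interface moments later. The file's assumed shape (illustrative; the shipped file may carry more options):

    [Match]
    Name=*

    [Network]
    DHCP=yes
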
May 13 00:08:29.045435 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:08:29.046852 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:08:29.052288 systemd-resolved[1312]: Positive Trust Anchors: May 13 00:08:29.052310 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:08:29.052790 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:08:29.055318 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:08:29.055360 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:08:29.065359 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:08:29.066357 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. May 13 00:08:29.066915 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:08:29.066955 systemd-timesyncd[1380]: Initial clock synchronization to Tue 2025-05-13 00:08:28.817140 UTC. May 13 00:08:29.067407 systemd-resolved[1312]: Defaulting to hostname 'linux'. May 13 00:08:29.069325 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:08:29.090553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:08:29.091832 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:08:29.095853 systemd[1]: Reached target network.target - Network. May 13 00:08:29.097103 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:08:29.105802 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:08:29.115524 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:08:29.140671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:08:29.181601 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:08:29.216019 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:08:29.217622 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:08:29.218729 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:08:29.219913 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:08:29.221178 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:08:29.222667 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:08:29.223858 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
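
Above, systemd-resolved loads the root DNSSEC trust anchor and, lacking any better source, defaults to hostname 'linux', while systemd-timesyncd synchronizes against 10.0.0.1, the same box that served the DHCPv4 lease. Both daemons can be interrogated at runtime (illustrative):

    resolvectl status
    timedatectl timesync-status
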
May 13 00:08:29.225098 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:08:29.226309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:08:29.226347 systemd[1]: Reached target paths.target - Path Units. May 13 00:08:29.227189 systemd[1]: Reached target timers.target - Timer Units. May 13 00:08:29.229180 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:08:29.231613 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:08:29.243391 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:08:29.246283 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:08:29.248060 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:08:29.249280 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:08:29.250189 systemd[1]: Reached target basic.target - Basic System. May 13 00:08:29.251185 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:08:29.251222 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:08:29.252175 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:08:29.254319 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:08:29.257397 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:08:29.258419 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:08:29.263959 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:08:29.265012 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:08:29.268469 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:08:29.270557 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:08:29.275452 jq[1410]: false May 13 00:08:29.276168 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:08:29.281718 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:08:29.284403 extend-filesystems[1411]: Found loop3 May 13 00:08:29.284403 extend-filesystems[1411]: Found loop4 May 13 00:08:29.284403 extend-filesystems[1411]: Found loop5 May 13 00:08:29.292807 extend-filesystems[1411]: Found vda May 13 00:08:29.292807 extend-filesystems[1411]: Found vda1 May 13 00:08:29.292807 extend-filesystems[1411]: Found vda2 May 13 00:08:29.292807 extend-filesystems[1411]: Found vda3 May 13 00:08:29.292807 extend-filesystems[1411]: Found usr May 13 00:08:29.292807 extend-filesystems[1411]: Found vda4 May 13 00:08:29.292807 extend-filesystems[1411]: Found vda6 May 13 00:08:29.292807 extend-filesystems[1411]: Found vda7 May 13 00:08:29.292807 extend-filesystems[1411]: Found vda9 May 13 00:08:29.292807 extend-filesystems[1411]: Checking size of /dev/vda9 May 13 00:08:29.285715 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:08:29.296031 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
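
extend-filesystems above walks the disk layout ("Found vda9" and friends) and, a little further on, grows the mounted ext4 root to fill its partition (the EXT4-fs "resizing"/"resized" kernel lines). The manual equivalent of that step (illustrative):

    resize2fs /dev/vda9    # online-grow the mounted ext4 root
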
May 13 00:08:29.302809 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:08:29.307498 dbus-daemon[1409]: [system] SELinux support is enabled May 13 00:08:29.310667 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:08:29.312894 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:08:29.315365 extend-filesystems[1411]: Resized partition /dev/vda9 May 13 00:08:29.315135 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:08:29.319718 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1353) May 13 00:08:29.320789 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:08:29.324763 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:08:29.324939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:08:29.325202 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:08:29.325384 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:08:29.326517 extend-filesystems[1434]: resize2fs 1.47.1 (20-May-2024) May 13 00:08:29.328483 jq[1431]: true May 13 00:08:29.329096 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:08:29.329469 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:08:29.351077 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:08:29.352920 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:08:29.359128 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:08:29.359321 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:08:29.361297 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:08:29.361403 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 00:08:29.363724 jq[1436]: true May 13 00:08:29.372771 tar[1435]: linux-arm64/helm May 13 00:08:29.382253 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:08:29.383983 systemd-logind[1419]: New seat seat0. May 13 00:08:29.386403 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:08:29.394199 update_engine[1430]: I20250513 00:08:29.393958 1430 main.cc:92] Flatcar Update Engine starting May 13 00:08:29.397246 systemd[1]: Started update-engine.service - Update Engine. May 13 00:08:29.399424 update_engine[1430]: I20250513 00:08:29.397593 1430 update_check_scheduler.cc:74] Next update check in 7m12s May 13 00:08:29.411611 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
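
update_engine above schedules its next update check in 7m12s; locksmithd, starting just below with strategy "reboot", decides when an applied update is allowed to reboot the machine. Both read their policy from the update configuration; a hypothetical /etc/flatcar/update.conf:

    GROUP=stable
    REBOOT_STRATEGY=reboot    # alternatives include etcd-lock, best-effort, off
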
May 13 00:08:29.452583 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:08:29.460794 locksmithd[1462]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:08:29.470317 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:08:29.470317 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:08:29.470317 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:08:29.480861 bash[1463]: Updated "/home/core/.ssh/authorized_keys" May 13 00:08:29.470697 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:08:29.481007 extend-filesystems[1411]: Resized filesystem in /dev/vda9 May 13 00:08:29.470872 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:08:29.473791 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:08:29.480988 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:08:29.597975 containerd[1437]: time="2025-05-13T00:08:29.597881640Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:08:29.628546 containerd[1437]: time="2025-05-13T00:08:29.628485240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:08:29.631051 containerd[1437]: time="2025-05-13T00:08:29.630999880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:08:29.631051 containerd[1437]: time="2025-05-13T00:08:29.631048000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:08:29.631133 containerd[1437]: time="2025-05-13T00:08:29.631068840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:08:29.631299 containerd[1437]: time="2025-05-13T00:08:29.631274520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:08:29.631328 containerd[1437]: time="2025-05-13T00:08:29.631300760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:08:29.631380 containerd[1437]: time="2025-05-13T00:08:29.631361080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:08:29.631406 containerd[1437]: time="2025-05-13T00:08:29.631399200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:08:29.631589 containerd[1437]: time="2025-05-13T00:08:29.631566640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:08:29.631616 containerd[1437]: time="2025-05-13T00:08:29.631587000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 13 00:08:29.631616 containerd[1437]: time="2025-05-13T00:08:29.631600920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:08:29.631616 containerd[1437]: time="2025-05-13T00:08:29.631610520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:08:29.631697 containerd[1437]: time="2025-05-13T00:08:29.631680600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:08:29.631916 containerd[1437]: time="2025-05-13T00:08:29.631883920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:08:29.632009 containerd[1437]: time="2025-05-13T00:08:29.631991240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:08:29.632040 containerd[1437]: time="2025-05-13T00:08:29.632008200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:08:29.632103 containerd[1437]: time="2025-05-13T00:08:29.632087320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:08:29.632147 containerd[1437]: time="2025-05-13T00:08:29.632135160Z" level=info msg="metadata content store policy set" policy=shared May 13 00:08:29.641350 containerd[1437]: time="2025-05-13T00:08:29.641304760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:08:29.641436 containerd[1437]: time="2025-05-13T00:08:29.641365480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:08:29.641436 containerd[1437]: time="2025-05-13T00:08:29.641388320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:08:29.641436 containerd[1437]: time="2025-05-13T00:08:29.641404720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:08:29.641436 containerd[1437]: time="2025-05-13T00:08:29.641422160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641576800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641822280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641913320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641930440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641943080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641956480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641968920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641981920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.641995720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.642013880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.642027120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:08:29.642032 containerd[1437]: time="2025-05-13T00:08:29.642039120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642050760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642071600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642086160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642098680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642110600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642122440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642135720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642147840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642160400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642172960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642187720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642200040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642220880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642237840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642280 containerd[1437]: time="2025-05-13T00:08:29.642254320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:08:29.642538 containerd[1437]: time="2025-05-13T00:08:29.642406320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642538 containerd[1437]: time="2025-05-13T00:08:29.642427440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642538 containerd[1437]: time="2025-05-13T00:08:29.642439400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:08:29.642588 containerd[1437]: time="2025-05-13T00:08:29.642555840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:08:29.642588 containerd[1437]: time="2025-05-13T00:08:29.642573680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:08:29.642588 containerd[1437]: time="2025-05-13T00:08:29.642584880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:08:29.642804 containerd[1437]: time="2025-05-13T00:08:29.642597640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:08:29.642832 containerd[1437]: time="2025-05-13T00:08:29.642802840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:08:29.642832 containerd[1437]: time="2025-05-13T00:08:29.642819720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:08:29.642867 containerd[1437]: time="2025-05-13T00:08:29.642831440Z" level=info msg="NRI interface is disabled by configuration." May 13 00:08:29.642867 containerd[1437]: time="2025-05-13T00:08:29.642841760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:08:29.644322 containerd[1437]: time="2025-05-13T00:08:29.643298800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:08:29.644322 containerd[1437]: time="2025-05-13T00:08:29.643368880Z" level=info msg="Connect containerd service" May 13 00:08:29.644322 containerd[1437]: time="2025-05-13T00:08:29.643452400Z" level=info msg="using legacy CRI server" May 13 00:08:29.644322 containerd[1437]: time="2025-05-13T00:08:29.643461480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:08:29.644322 containerd[1437]: time="2025-05-13T00:08:29.643545480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:08:29.644697 containerd[1437]: time="2025-05-13T00:08:29.644647200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:08:29.644933 
containerd[1437]: time="2025-05-13T00:08:29.644892360Z" level=info msg="Start subscribing containerd event" May 13 00:08:29.644964 containerd[1437]: time="2025-05-13T00:08:29.644942560Z" level=info msg="Start recovering state" May 13 00:08:29.645279 containerd[1437]: time="2025-05-13T00:08:29.645003960Z" level=info msg="Start event monitor" May 13 00:08:29.645279 containerd[1437]: time="2025-05-13T00:08:29.645017400Z" level=info msg="Start snapshots syncer" May 13 00:08:29.645279 containerd[1437]: time="2025-05-13T00:08:29.645026760Z" level=info msg="Start cni network conf syncer for default" May 13 00:08:29.645279 containerd[1437]: time="2025-05-13T00:08:29.645034840Z" level=info msg="Start streaming server" May 13 00:08:29.645846 containerd[1437]: time="2025-05-13T00:08:29.645825120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:08:29.645889 containerd[1437]: time="2025-05-13T00:08:29.645880760Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:08:29.647455 containerd[1437]: time="2025-05-13T00:08:29.645994080Z" level=info msg="containerd successfully booted in 0.049542s" May 13 00:08:29.646024 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:08:29.691135 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:08:29.710729 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:08:29.727607 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:08:29.732436 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:08:29.732741 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:08:29.737366 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:08:29.752301 tar[1435]: linux-arm64/LICENSE May 13 00:08:29.752301 tar[1435]: linux-arm64/README.md May 13 00:08:29.752661 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:08:29.768306 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:08:29.770575 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 00:08:29.771986 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:08:29.773494 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:08:30.827397 systemd-networkd[1374]: eth0: Gained IPv6LL May 13 00:08:30.829946 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:08:30.831824 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:08:30.849552 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:08:30.852201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:08:30.854504 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:08:30.870983 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:08:30.871184 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:08:30.872910 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:08:30.875044 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:08:31.366012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:08:31.367586 systemd[1]: Reached target multi-user.target - Multi-User System. 
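The cni config load failure logged above is expected at this point in boot: containerd's CRI plugin watches /etc/cni/net.d (the NetworkPluginConfDir in the config dump) and re-reads it once a network add-on installs a config there. A minimal sketch of a conflist that would satisfy the loader, placed at, say, /etc/cni/net.d/10-containerd-net.conflist (NetworkPluginMaxConfNum:1 means only the lexically first file is used); it assumes the stock bridge and portmap plugins under /opt/cni/bin, and the file name, network name, and subnet are illustrative, not recovered from this host:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.88.0.0/16"}]],
            "routes": [{"dst": "0.0.0.0/0"}]
          }
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }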
May 13 00:08:31.372361 systemd[1]: Startup finished in 581ms (kernel) + 4.676s (initrd) + 4.022s (userspace) = 9.280s. May 13 00:08:31.372632 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:08:31.944196 kubelet[1521]: E0513 00:08:31.944127 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:08:31.946449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:08:31.946590 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:08:35.472171 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:08:35.473266 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:33856.service - OpenSSH per-connection server daemon (10.0.0.1:33856). May 13 00:08:35.547654 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 33856 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:08:35.549456 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:08:35.556932 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:08:35.570506 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:08:35.575487 systemd-logind[1419]: New session 1 of user core. May 13 00:08:35.584704 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:08:35.595561 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:08:35.597847 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:08:35.678591 systemd[1538]: Queued start job for default target default.target. May 13 00:08:35.701281 systemd[1538]: Created slice app.slice - User Application Slice. May 13 00:08:35.701311 systemd[1538]: Reached target paths.target - Paths. May 13 00:08:35.701322 systemd[1538]: Reached target timers.target - Timers. May 13 00:08:35.702511 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:08:35.712580 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:08:35.712631 systemd[1538]: Reached target sockets.target - Sockets. May 13 00:08:35.712642 systemd[1538]: Reached target basic.target - Basic System. May 13 00:08:35.712675 systemd[1538]: Reached target default.target - Main User Target. May 13 00:08:35.712698 systemd[1538]: Startup finished in 107ms. May 13 00:08:35.713022 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:08:35.714932 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:08:35.793918 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:33858.service - OpenSSH per-connection server daemon (10.0.0.1:33858). May 13 00:08:35.825629 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 33858 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:08:35.827048 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:08:35.832203 systemd-logind[1419]: New session 2 of user core. May 13 00:08:35.836464 systemd[1]: Started session-2.scope - Session 2 of User core. 
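The kubelet exit above, and the restarts that follow later in this log, are the normal pre-bootstrap loop: kubelet.service keeps restarting until something, typically kubeadm init or join, writes /var/lib/kubelet/config.yaml. A minimal sketch of what that file looks like once generated; cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config dump above, while the DNS values are kubeadm-style defaults, not read from this machine:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
    rotateCertificates: true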
May 13 00:08:35.890871 sshd[1549]: pam_unix(sshd:session): session closed for user core May 13 00:08:35.903737 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:33858.service: Deactivated successfully. May 13 00:08:35.905124 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:08:35.906452 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. May 13 00:08:35.916613 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:33860.service - OpenSSH per-connection server daemon (10.0.0.1:33860). May 13 00:08:35.917568 systemd-logind[1419]: Removed session 2. May 13 00:08:35.946049 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 33860 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:08:35.947442 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:08:35.951554 systemd-logind[1419]: New session 3 of user core. May 13 00:08:35.962435 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:08:36.010167 sshd[1556]: pam_unix(sshd:session): session closed for user core May 13 00:08:36.018571 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:33860.service: Deactivated successfully. May 13 00:08:36.020478 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:08:36.021965 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. May 13 00:08:36.031151 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:33866.service - OpenSSH per-connection server daemon (10.0.0.1:33866). May 13 00:08:36.032256 systemd-logind[1419]: Removed session 3. May 13 00:08:36.068706 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 33866 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:08:36.070161 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:08:36.073676 systemd-logind[1419]: New session 4 of user core. May 13 00:08:36.081424 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:08:36.132181 sshd[1563]: pam_unix(sshd:session): session closed for user core May 13 00:08:36.142750 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:33866.service: Deactivated successfully. May 13 00:08:36.145726 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:08:36.146995 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. May 13 00:08:36.148231 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:33876.service - OpenSSH per-connection server daemon (10.0.0.1:33876). May 13 00:08:36.148940 systemd-logind[1419]: Removed session 4. May 13 00:08:36.179708 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 33876 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:08:36.180850 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:08:36.185283 systemd-logind[1419]: New session 5 of user core. May 13 00:08:36.196436 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:08:36.265586 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:08:36.265918 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:08:36.287065 sudo[1573]: pam_unix(sudo:session): session closed for user root May 13 00:08:36.288810 sshd[1570]: pam_unix(sshd:session): session closed for user core May 13 00:08:36.299769 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:33876.service: Deactivated successfully. 
May 13 00:08:36.301132 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:08:36.302532 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. May 13 00:08:36.313702 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:33886.service - OpenSSH per-connection server daemon (10.0.0.1:33886). May 13 00:08:36.314588 systemd-logind[1419]: Removed session 5. May 13 00:08:36.348472 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:08:36.349941 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:08:36.353455 systemd-logind[1419]: New session 6 of user core. May 13 00:08:36.371444 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:08:36.422097 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:08:36.422399 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:08:36.425378 sudo[1582]: pam_unix(sudo:session): session closed for user root May 13 00:08:36.430216 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:08:36.430540 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:08:36.446528 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:08:36.447736 auditctl[1585]: No rules May 13 00:08:36.448591 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:08:36.449350 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:08:36.451314 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:08:36.477393 augenrules[1603]: No rules May 13 00:08:36.478925 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:08:36.480334 sudo[1581]: pam_unix(sudo:session): session closed for user root May 13 00:08:36.482008 sshd[1578]: pam_unix(sshd:session): session closed for user core May 13 00:08:36.491854 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:33886.service: Deactivated successfully. May 13 00:08:36.493510 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:08:36.495062 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. May 13 00:08:36.509616 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890). May 13 00:08:36.510410 systemd-logind[1419]: Removed session 6. May 13 00:08:36.542006 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:08:36.543330 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:08:36.547340 systemd-logind[1419]: New session 7 of user core. May 13 00:08:36.557434 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:08:36.607303 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:08:36.607578 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:08:36.909504 systemd[1]: Starting docker.service - Docker Application Container Engine... 
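The audit-rules bounce above (auditctl and augenrules both reporting "No rules") follows directly from the two sudo rm calls: augenrules assembles /etc/audit/audit.rules by concatenating /etc/audit/rules.d/*.rules, so emptying that directory loads an empty rule set on restart. A sketch of the kind of file that would repopulate it; the file name is taken from the rm command above, the rule bodies are purely illustrative:

    # /etc/audit/rules.d/99-default.rules
    # Flush any loaded rules, then size the kernel backlog buffer.
    -D
    -b 8192
    # Example watch: record writes and attribute changes under /etc/kubernetes.
    -w /etc/kubernetes/ -p wa -k kube-config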
May 13 00:08:36.909590 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:08:37.179295 dockerd[1633]: time="2025-05-13T00:08:37.179157547Z" level=info msg="Starting up" May 13 00:08:37.317140 dockerd[1633]: time="2025-05-13T00:08:37.317094062Z" level=info msg="Loading containers: start." May 13 00:08:37.399282 kernel: Initializing XFRM netlink socket May 13 00:08:37.469079 systemd-networkd[1374]: docker0: Link UP May 13 00:08:37.487577 dockerd[1633]: time="2025-05-13T00:08:37.487513350Z" level=info msg="Loading containers: done." May 13 00:08:37.506842 dockerd[1633]: time="2025-05-13T00:08:37.506792126Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:08:37.507033 dockerd[1633]: time="2025-05-13T00:08:37.506893469Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 00:08:37.507033 dockerd[1633]: time="2025-05-13T00:08:37.506995957Z" level=info msg="Daemon has completed initialization" May 13 00:08:37.536529 dockerd[1633]: time="2025-05-13T00:08:37.536386500Z" level=info msg="API listen on /run/docker.sock" May 13 00:08:37.536514 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:08:38.128826 containerd[1437]: time="2025-05-13T00:08:38.128772941Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 00:08:38.808636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111477635.mount: Deactivated successfully. 
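The "referenced but unset environment variable" line at the start of the docker unit above is informational: docker.service expands $DOCKER_OPTS and the related variables, and systemd substitutes empty strings when they are undefined. If values are ever needed, the non-destructive route is a drop-in rather than editing the shipped unit; a sketch, with the path conventional and the MTU value illustrative:

    # /etc/systemd/system/docker.service.d/10-opts.conf
    [Service]
    Environment="DOCKER_OPT_MTU=--mtu=1460"

followed by a systemctl daemon-reload and a restart of docker.service.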
May 13 00:08:39.731610 containerd[1437]: time="2025-05-13T00:08:39.731558780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:39.732681 containerd[1437]: time="2025-05-13T00:08:39.732645961Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 13 00:08:39.734283 containerd[1437]: time="2025-05-13T00:08:39.733656152Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:39.737185 containerd[1437]: time="2025-05-13T00:08:39.737127983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:39.739135 containerd[1437]: time="2025-05-13T00:08:39.738843532Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.610020951s" May 13 00:08:39.739135 containerd[1437]: time="2025-05-13T00:08:39.738897977Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 13 00:08:39.741634 containerd[1437]: time="2025-05-13T00:08:39.741497829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 00:08:40.832286 containerd[1437]: time="2025-05-13T00:08:40.832082845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:40.833738 containerd[1437]: time="2025-05-13T00:08:40.833678395Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 13 00:08:40.836292 containerd[1437]: time="2025-05-13T00:08:40.834328057Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:40.837254 containerd[1437]: time="2025-05-13T00:08:40.837211069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:40.838756 containerd[1437]: time="2025-05-13T00:08:40.838723150Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.097192503s" May 13 00:08:40.838797 containerd[1437]: time="2025-05-13T00:08:40.838759053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 13 00:08:40.839186 
containerd[1437]: time="2025-05-13T00:08:40.839166480Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 00:08:41.890337 containerd[1437]: time="2025-05-13T00:08:41.889528100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:41.890337 containerd[1437]: time="2025-05-13T00:08:41.890324219Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 13 00:08:41.891164 containerd[1437]: time="2025-05-13T00:08:41.891135032Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:41.893979 containerd[1437]: time="2025-05-13T00:08:41.893916106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:41.895430 containerd[1437]: time="2025-05-13T00:08:41.895132623Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.055935507s" May 13 00:08:41.895430 containerd[1437]: time="2025-05-13T00:08:41.895167530Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 13 00:08:41.895817 containerd[1437]: time="2025-05-13T00:08:41.895790621Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 00:08:42.197168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:08:42.209490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:08:42.300670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:08:42.305221 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:08:42.343488 kubelet[1854]: E0513 00:08:42.343421 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:08:42.346590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:08:42.346852 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:08:42.879314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187589218.mount: Deactivated successfully. 
May 13 00:08:43.250886 containerd[1437]: time="2025-05-13T00:08:43.250762144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:43.251411 containerd[1437]: time="2025-05-13T00:08:43.251374039Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 13 00:08:43.252048 containerd[1437]: time="2025-05-13T00:08:43.252014019Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:43.254053 containerd[1437]: time="2025-05-13T00:08:43.253999279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:43.255319 containerd[1437]: time="2025-05-13T00:08:43.254816160Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.358992044s" May 13 00:08:43.255319 containerd[1437]: time="2025-05-13T00:08:43.254850848Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 13 00:08:43.255450 containerd[1437]: time="2025-05-13T00:08:43.255418350Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:08:43.880380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686888087.mount: Deactivated successfully. 
May 13 00:08:44.518731 containerd[1437]: time="2025-05-13T00:08:44.518679541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:44.519757 containerd[1437]: time="2025-05-13T00:08:44.519474018Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 13 00:08:44.520820 containerd[1437]: time="2025-05-13T00:08:44.520780101Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:44.523970 containerd[1437]: time="2025-05-13T00:08:44.523915378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:44.525144 containerd[1437]: time="2025-05-13T00:08:44.525103392Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.269656553s" May 13 00:08:44.525195 containerd[1437]: time="2025-05-13T00:08:44.525144036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 00:08:44.525629 containerd[1437]: time="2025-05-13T00:08:44.525568901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:08:45.101323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270011231.mount: Deactivated successfully. 
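Each pull above flows through containerd's image service on the endpoint from the CRI config dump, /run/containerd/containerd.sock. The same operation can be driven directly with the containerd Go client; a minimal sketch, with the k8s.io namespace assumed because CRI-originated images live there:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the socket the daemon advertised at startup.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Images pulled via the CRI plugin live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack into the configured snapshotter (overlayfs here),
        // mirroring the PullImage entries in this log.
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }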
May 13 00:08:45.107705 containerd[1437]: time="2025-05-13T00:08:45.107648327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:45.108189 containerd[1437]: time="2025-05-13T00:08:45.108144548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 00:08:45.109090 containerd[1437]: time="2025-05-13T00:08:45.109045976Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:45.111859 containerd[1437]: time="2025-05-13T00:08:45.111403923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:45.112486 containerd[1437]: time="2025-05-13T00:08:45.112456671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 586.853649ms" May 13 00:08:45.112557 containerd[1437]: time="2025-05-13T00:08:45.112493834Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 00:08:45.112961 containerd[1437]: time="2025-05-13T00:08:45.112921346Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 00:08:45.584171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334693930.mount: Deactivated successfully. May 13 00:08:47.098126 containerd[1437]: time="2025-05-13T00:08:47.098060983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:47.098625 containerd[1437]: time="2025-05-13T00:08:47.098592025Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 13 00:08:47.099533 containerd[1437]: time="2025-05-13T00:08:47.099500327Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:47.102770 containerd[1437]: time="2025-05-13T00:08:47.102729082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:08:47.104417 containerd[1437]: time="2025-05-13T00:08:47.104279347Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.991305957s" May 13 00:08:47.104417 containerd[1437]: time="2025-05-13T00:08:47.104319059Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 13 00:08:52.521272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 13 00:08:52.530527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:08:52.540083 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:08:52.540174 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 00:08:52.540523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:08:52.543531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:08:52.566997 systemd[1]: Reloading requested from client PID 2005 ('systemctl') (unit session-7.scope)... May 13 00:08:52.567015 systemd[1]: Reloading... May 13 00:08:52.635303 zram_generator::config[2042]: No configuration found. May 13 00:08:52.779093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:08:52.833235 systemd[1]: Reloading finished in 265 ms. May 13 00:08:52.870066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:08:52.873114 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:08:52.873317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:08:52.874872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:08:52.972244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:08:52.976556 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:08:53.011306 kubelet[2091]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:08:53.011306 kubelet[2091]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:08:53.011306 kubelet[2091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
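The deprecated-flag warnings and the unset KUBELET_EXTRA_ARGS above reflect the usual kubeadm wiring: the flags arrive through environment files pulled in by a unit drop-in, and upstream is migrating them into the kubelet config file. A sketch following the upstream kubeadm layout (paths are the upstream defaults, not read from this host):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm writes runtime flags (e.g. --container-runtime-endpoint) here at init/join time.
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # Admin overrides land in KUBELET_EXTRA_ARGS, the variable this unit reports as unset.
    EnvironmentFile=-/etc/sysconfig/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS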
May 13 00:08:53.011704 kubelet[2091]: I0513 00:08:53.011442 2091 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:08:53.523143 kubelet[2091]: I0513 00:08:53.522067 2091 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:08:53.523143 kubelet[2091]: I0513 00:08:53.522102 2091 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:08:53.523143 kubelet[2091]: I0513 00:08:53.522536 2091 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:08:53.564826 kubelet[2091]: E0513 00:08:53.564786 2091 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" May 13 00:08:53.565887 kubelet[2091]: I0513 00:08:53.565865 2091 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:08:53.572454 kubelet[2091]: E0513 00:08:53.572409 2091 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:08:53.572454 kubelet[2091]: I0513 00:08:53.572454 2091 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:08:53.575694 kubelet[2091]: I0513 00:08:53.575667 2091 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:08:53.576563 kubelet[2091]: I0513 00:08:53.576527 2091 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:08:53.576836 kubelet[2091]: I0513 00:08:53.576689 2091 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:08:53.577022 kubelet[2091]: I0513 00:08:53.576828 2091 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:08:53.577167 kubelet[2091]: I0513 00:08:53.577147 2091 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:08:53.577167 kubelet[2091]: I0513 00:08:53.577159 2091 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:08:53.577381 kubelet[2091]: I0513 00:08:53.577359 2091 state_mem.go:36] "Initialized new in-memory state store" May 13 00:08:53.580695 kubelet[2091]: I0513 00:08:53.580666 2091 kubelet.go:408] "Attempting to sync node with API server" May 13 00:08:53.580695 kubelet[2091]: I0513 00:08:53.580697 2091 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:08:53.581299 kubelet[2091]: I0513 00:08:53.581286 2091 kubelet.go:314] "Adding apiserver pod source" May 13 00:08:53.581340 kubelet[2091]: I0513 00:08:53.581315 2091 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:08:53.583958 kubelet[2091]: I0513 00:08:53.583705 2091 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:08:53.583958 kubelet[2091]: W0513 00:08:53.583840 2091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 13 00:08:53.583958 kubelet[2091]: E0513 00:08:53.583893 2091 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" May 13 00:08:53.584910 kubelet[2091]: W0513 00:08:53.584853 2091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 13 00:08:53.584953 kubelet[2091]: E0513 00:08:53.584922 2091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" May 13 00:08:53.585691 kubelet[2091]: I0513 00:08:53.585667 2091 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:08:53.586370 kubelet[2091]: W0513 00:08:53.586349 2091 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:08:53.587626 kubelet[2091]: I0513 00:08:53.587601 2091 server.go:1269] "Started kubelet" May 13 00:08:53.587792 kubelet[2091]: I0513 00:08:53.587745 2091 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:08:53.588064 kubelet[2091]: I0513 00:08:53.588041 2091 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:08:53.588435 kubelet[2091]: I0513 00:08:53.588405 2091 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:08:53.590142 kubelet[2091]: I0513 00:08:53.590113 2091 server.go:460] "Adding debug handlers to kubelet server" May 13 00:08:53.592955 kubelet[2091]: I0513 00:08:53.592931 2091 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:08:53.593155 kubelet[2091]: I0513 00:08:53.593142 2091 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:08:53.594148 kubelet[2091]: E0513 00:08:53.591446 2091 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eed95e95ea44e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:08:53.587534926 +0000 UTC m=+0.607844843,LastTimestamp:2025-05-13 00:08:53.587534926 +0000 UTC m=+0.607844843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:08:53.594357 kubelet[2091]: E0513 00:08:53.594291 2091 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:08:53.594357 kubelet[2091]: E0513 00:08:53.594305 2091 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:08:53.596026 kubelet[2091]: I0513 00:08:53.594455 2091 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:08:53.596026 kubelet[2091]: I0513 00:08:53.594614 2091 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:08:53.596026 kubelet[2091]: I0513 00:08:53.594705 2091 reconciler.go:26] "Reconciler: start to sync state" May 13 00:08:53.596026 kubelet[2091]: W0513 00:08:53.595111 2091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 13 00:08:53.596026 kubelet[2091]: E0513 00:08:53.595156 2091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" May 13 00:08:53.596026 kubelet[2091]: I0513 00:08:53.595351 2091 factory.go:221] Registration of the systemd container factory successfully May 13 00:08:53.596026 kubelet[2091]: I0513 00:08:53.595432 2091 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:08:53.596026 kubelet[2091]: E0513 00:08:53.595426 2091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" May 13 00:08:53.596434 kubelet[2091]: I0513 00:08:53.596411 2091 factory.go:221] Registration of the containerd container factory successfully May 13 00:08:53.608521 kubelet[2091]: I0513 00:08:53.608456 2091 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:08:53.609406 kubelet[2091]: I0513 00:08:53.609381 2091 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:08:53.609406 kubelet[2091]: I0513 00:08:53.609405 2091 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:08:53.609460 kubelet[2091]: I0513 00:08:53.609421 2091 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:08:53.609490 kubelet[2091]: E0513 00:08:53.609459 2091 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:08:53.613336 kubelet[2091]: I0513 00:08:53.613302 2091 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:08:53.613336 kubelet[2091]: I0513 00:08:53.613320 2091 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:08:53.613336 kubelet[2091]: I0513 00:08:53.613339 2091 state_mem.go:36] "Initialized new in-memory state store" May 13 00:08:53.614062 kubelet[2091]: W0513 00:08:53.614011 2091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused May 13 00:08:53.614106 kubelet[2091]: E0513 00:08:53.614072 2091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" May 13 00:08:53.694904 kubelet[2091]: E0513 00:08:53.694865 2091 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:08:53.710127 kubelet[2091]: E0513 00:08:53.710107 2091 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:08:53.795447 kubelet[2091]: E0513 00:08:53.795317 2091 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:08:53.796833 kubelet[2091]: E0513 00:08:53.796759 2091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" May 13 00:08:53.896171 kubelet[2091]: E0513 00:08:53.896134 2091 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:08:53.910496 kubelet[2091]: E0513 00:08:53.910461 2091 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:08:53.928229 kubelet[2091]: I0513 00:08:53.928197 2091 policy_none.go:49] "None policy: Start" May 13 00:08:53.929026 kubelet[2091]: I0513 00:08:53.928999 2091 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:08:53.929026 kubelet[2091]: I0513 00:08:53.929023 2091 state_mem.go:35] "Initializing new in-memory state store" May 13 00:08:53.996463 kubelet[2091]: E0513 00:08:53.996407 2091 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:08:54.061399 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:08:54.073254 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 13 00:08:54.076190 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:08:54.089141 kubelet[2091]: I0513 00:08:54.089108 2091 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:08:54.089481 kubelet[2091]: I0513 00:08:54.089366 2091 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:08:54.089481 kubelet[2091]: I0513 00:08:54.089378 2091 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:08:54.089895 kubelet[2091]: I0513 00:08:54.089852 2091 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:08:54.090735 kubelet[2091]: E0513 00:08:54.090699 2091 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:08:54.190747 kubelet[2091]: I0513 00:08:54.190692 2091 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 00:08:54.191112 kubelet[2091]: E0513 00:08:54.191088 2091 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" May 13 00:08:54.199512 kubelet[2091]: E0513 00:08:54.199460 2091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" May 13 00:08:54.321301 systemd[1]: Created slice kubepods-burstable-podd7d116320686b8d09e8c465a51e62ca3.slice - libcontainer container kubepods-burstable-podd7d116320686b8d09e8c465a51e62ca3.slice. May 13 00:08:54.344553 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 13 00:08:54.347649 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
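The three burstable pod slices above are the cgroups for the control-plane static pods the kubelet found under /etc/kubernetes/manifests (the static pod path it logged when starting); the pod UID embedded in each slice name is derived from the manifest contents. A skeletal sketch of such a manifest, with the host-path volume matching the k8s-certs reconciler entries that follow; the image tag is the one pulled earlier in this log, and everything else is abbreviated:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (skeleton; a real manifest carries many more flags)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.31.8
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate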
May 13 00:08:54.393172 kubelet[2091]: I0513 00:08:54.393136 2091 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 00:08:54.393521 kubelet[2091]: E0513 00:08:54.393494 2091 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
May 13 00:08:54.398845 kubelet[2091]: I0513 00:08:54.398798 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:08:54.398845 kubelet[2091]: I0513 00:08:54.398846 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:08:54.398965 kubelet[2091]: I0513 00:08:54.398867 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:08:54.398965 kubelet[2091]: I0513 00:08:54.398932 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:08:54.399009 kubelet[2091]: I0513 00:08:54.398969 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:08:54.399009 kubelet[2091]: I0513 00:08:54.398986 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7d116320686b8d09e8c465a51e62ca3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7d116320686b8d09e8c465a51e62ca3\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:08:54.399050 kubelet[2091]: I0513 00:08:54.399012 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7d116320686b8d09e8c465a51e62ca3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7d116320686b8d09e8c465a51e62ca3\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:08:54.399050 kubelet[2091]: I0513 00:08:54.399031 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7d116320686b8d09e8c465a51e62ca3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d7d116320686b8d09e8c465a51e62ca3\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:08:54.399050 kubelet[2091]: I0513 00:08:54.399047 2091 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:08:54.643206 kubelet[2091]: E0513 00:08:54.643090 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:54.643734 containerd[1437]: time="2025-05-13T00:08:54.643695147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d7d116320686b8d09e8c465a51e62ca3,Namespace:kube-system,Attempt:0,}"
May 13 00:08:54.646984 kubelet[2091]: E0513 00:08:54.646954 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:54.647405 containerd[1437]: time="2025-05-13T00:08:54.647364455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 13 00:08:54.649668 kubelet[2091]: E0513 00:08:54.649645 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:54.649993 containerd[1437]: time="2025-05-13T00:08:54.649965078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 13 00:08:54.734669 kubelet[2091]: W0513 00:08:54.734564 2091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
May 13 00:08:54.734669 kubelet[2091]: E0513 00:08:54.734633 2091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 13 00:08:54.776430 kubelet[2091]: W0513 00:08:54.776341 2091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
May 13 00:08:54.776619 kubelet[2091]: E0513 00:08:54.776440 2091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 13 00:08:54.794798 kubelet[2091]: I0513 00:08:54.794773 2091 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 00:08:54.795135 kubelet[2091]: E0513 00:08:54.795101 2091 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
May 13 00:08:54.943332 kubelet[2091]: W0513 00:08:54.943138 2091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
May 13 00:08:54.943332 kubelet[2091]: E0513 00:08:54.943210 2091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 13 00:08:54.999933 kubelet[2091]: E0513 00:08:54.999879 2091 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s"
May 13 00:08:55.168430 kubelet[2091]: W0513 00:08:55.168317 2091 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
May 13 00:08:55.168430 kubelet[2091]: E0513 00:08:55.168398 2091 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 13 00:08:55.173647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355693101.mount: Deactivated successfully.
May 13 00:08:55.177832 containerd[1437]: time="2025-05-13T00:08:55.177752423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:08:55.178518 containerd[1437]: time="2025-05-13T00:08:55.178346005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 13 00:08:55.179028 containerd[1437]: time="2025-05-13T00:08:55.178991329Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:08:55.179800 containerd[1437]: time="2025-05-13T00:08:55.179778056Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:08:55.180592 containerd[1437]: time="2025-05-13T00:08:55.180542009Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:08:55.180719 containerd[1437]: time="2025-05-13T00:08:55.180688886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 13 00:08:55.181506 containerd[1437]: time="2025-05-13T00:08:55.181418836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
May 13 00:08:55.184788 containerd[1437]: time="2025-05-13T00:08:55.184747784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.315293ms"
May 13 00:08:55.186158 containerd[1437]: time="2025-05-13T00:08:55.186125176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:08:55.187343 containerd[1437]: time="2025-05-13T00:08:55.187172535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.397687ms"
May 13 00:08:55.189519 containerd[1437]: time="2025-05-13T00:08:55.189440659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.412739ms"
May 13 00:08:55.377412 containerd[1437]: time="2025-05-13T00:08:55.377141508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:08:55.377412 containerd[1437]: time="2025-05-13T00:08:55.377210831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:08:55.377554 containerd[1437]: time="2025-05-13T00:08:55.377200443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:08:55.377628 containerd[1437]: time="2025-05-13T00:08:55.377236483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:08:55.377628 containerd[1437]: time="2025-05-13T00:08:55.377416044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:08:55.378360 containerd[1437]: time="2025-05-13T00:08:55.377254503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:08:55.378466 containerd[1437]: time="2025-05-13T00:08:55.378347091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:08:55.378534 containerd[1437]: time="2025-05-13T00:08:55.378448579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:08:55.379031 containerd[1437]: time="2025-05-13T00:08:55.378940313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:08:55.379176 containerd[1437]: time="2025-05-13T00:08:55.379140691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:08:55.379767 containerd[1437]: time="2025-05-13T00:08:55.379719209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:08:55.380018 containerd[1437]: time="2025-05-13T00:08:55.379953310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:08:55.402461 systemd[1]: Started cri-containerd-c0764191cf48ca6070ac2e32f9253a649b07527ac286ecc2f6c039df15a67340.scope - libcontainer container c0764191cf48ca6070ac2e32f9253a649b07527ac286ecc2f6c039df15a67340.
May 13 00:08:55.403914 systemd[1]: Started cri-containerd-d5e26d31c35ac9606024e89e4bcda6d89b2d8241ac697a58adde56417da2dc40.scope - libcontainer container d5e26d31c35ac9606024e89e4bcda6d89b2d8241ac697a58adde56417da2dc40.
May 13 00:08:55.405923 systemd[1]: Started cri-containerd-fbdd7e38ea6a705767e567f233a08361e1a7dac0cdd4a47b4ab01adcb4429ebd.scope - libcontainer container fbdd7e38ea6a705767e567f233a08361e1a7dac0cdd4a47b4ab01adcb4429ebd.
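The RunPodSandbox lines around here are CRI traffic: the kubelet asks containerd, over its unix socket, to create one sandbox per static pod, and containerd answers with a sandbox id. A hedged sketch of the same call using the published CRI client types (the metadata fields mirror the log; the socket path is containerd's usual default rather than something shown in this log, and a real sandbox config carries many more fields):

```go
// Hedged sketch of the CRI call behind "RunPodSandbox for &PodSandboxMetadata{...}".
// Assumes the google.golang.org/grpc and k8s.io/cri-api modules.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-apiserver-localhost", // values taken from the log lines
				Uid:       "d7d116320686b8d09e8c465a51e62ca3",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd then logs: RunPodSandbox ... returns sandbox id "..."
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```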
May 13 00:08:55.440633 containerd[1437]: time="2025-05-13T00:08:55.440578426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0764191cf48ca6070ac2e32f9253a649b07527ac286ecc2f6c039df15a67340\""
May 13 00:08:55.446227 containerd[1437]: time="2025-05-13T00:08:55.446183370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d7d116320686b8d09e8c465a51e62ca3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbdd7e38ea6a705767e567f233a08361e1a7dac0cdd4a47b4ab01adcb4429ebd\""
May 13 00:08:55.452540 kubelet[2091]: E0513 00:08:55.452510 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:55.452797 kubelet[2091]: E0513 00:08:55.452770 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:55.455169 containerd[1437]: time="2025-05-13T00:08:55.454838130Z" level=info msg="CreateContainer within sandbox \"fbdd7e38ea6a705767e567f233a08361e1a7dac0cdd4a47b4ab01adcb4429ebd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 00:08:55.455169 containerd[1437]: time="2025-05-13T00:08:55.455094606Z" level=info msg="CreateContainer within sandbox \"c0764191cf48ca6070ac2e32f9253a649b07527ac286ecc2f6c039df15a67340\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 00:08:55.455301 containerd[1437]: time="2025-05-13T00:08:55.455257505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5e26d31c35ac9606024e89e4bcda6d89b2d8241ac697a58adde56417da2dc40\""
May 13 00:08:55.457086 kubelet[2091]: E0513 00:08:55.457053 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:55.459203 containerd[1437]: time="2025-05-13T00:08:55.458817357Z" level=info msg="CreateContainer within sandbox \"d5e26d31c35ac9606024e89e4bcda6d89b2d8241ac697a58adde56417da2dc40\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 00:08:55.476635 containerd[1437]: time="2025-05-13T00:08:55.476586128Z" level=info msg="CreateContainer within sandbox \"fbdd7e38ea6a705767e567f233a08361e1a7dac0cdd4a47b4ab01adcb4429ebd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"28978e1cf03fed0ff19fd8a30f13611532c04793232fe57a6cdd8f2895b31227\""
May 13 00:08:55.477604 containerd[1437]: time="2025-05-13T00:08:55.477285672Z" level=info msg="StartContainer for \"28978e1cf03fed0ff19fd8a30f13611532c04793232fe57a6cdd8f2895b31227\""
May 13 00:08:55.479492 containerd[1437]: time="2025-05-13T00:08:55.479447075Z" level=info msg="CreateContainer within sandbox \"c0764191cf48ca6070ac2e32f9253a649b07527ac286ecc2f6c039df15a67340\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"43173ee99e208ea8fe7f47800bee2a589f4bba6cd8bcbf673393f809b4b57038\""
May 13 00:08:55.479959 containerd[1437]: time="2025-05-13T00:08:55.479933535Z" level=info msg="StartContainer for \"43173ee99e208ea8fe7f47800bee2a589f4bba6cd8bcbf673393f809b4b57038\""
May 13 00:08:55.481155 containerd[1437]: time="2025-05-13T00:08:55.481023446Z" level=info msg="CreateContainer within sandbox \"d5e26d31c35ac9606024e89e4bcda6d89b2d8241ac697a58adde56417da2dc40\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"481f219c21b01f145fba86d7d1d7538079cf3d7eb9296ce42ef92f47fc009406\""
May 13 00:08:55.481512 containerd[1437]: time="2025-05-13T00:08:55.481465156Z" level=info msg="StartContainer for \"481f219c21b01f145fba86d7d1d7538079cf3d7eb9296ce42ef92f47fc009406\""
May 13 00:08:55.505446 systemd[1]: Started cri-containerd-28978e1cf03fed0ff19fd8a30f13611532c04793232fe57a6cdd8f2895b31227.scope - libcontainer container 28978e1cf03fed0ff19fd8a30f13611532c04793232fe57a6cdd8f2895b31227.
May 13 00:08:55.509461 systemd[1]: Started cri-containerd-43173ee99e208ea8fe7f47800bee2a589f4bba6cd8bcbf673393f809b4b57038.scope - libcontainer container 43173ee99e208ea8fe7f47800bee2a589f4bba6cd8bcbf673393f809b4b57038.
May 13 00:08:55.510844 systemd[1]: Started cri-containerd-481f219c21b01f145fba86d7d1d7538079cf3d7eb9296ce42ef92f47fc009406.scope - libcontainer container 481f219c21b01f145fba86d7d1d7538079cf3d7eb9296ce42ef92f47fc009406.
May 13 00:08:55.547799 containerd[1437]: time="2025-05-13T00:08:55.547741805Z" level=info msg="StartContainer for \"28978e1cf03fed0ff19fd8a30f13611532c04793232fe57a6cdd8f2895b31227\" returns successfully"
May 13 00:08:55.563215 containerd[1437]: time="2025-05-13T00:08:55.563160383Z" level=info msg="StartContainer for \"43173ee99e208ea8fe7f47800bee2a589f4bba6cd8bcbf673393f809b4b57038\" returns successfully"
May 13 00:08:55.586179 containerd[1437]: time="2025-05-13T00:08:55.586128907Z" level=info msg="StartContainer for \"481f219c21b01f145fba86d7d1d7538079cf3d7eb9296ce42ef92f47fc009406\" returns successfully"
May 13 00:08:55.597634 kubelet[2091]: I0513 00:08:55.597599 2091 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 00:08:55.597904 kubelet[2091]: E0513 00:08:55.597881 2091 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
May 13 00:08:55.626967 kubelet[2091]: E0513 00:08:55.626818 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:55.638734 kubelet[2091]: E0513 00:08:55.638377 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:55.640189 kubelet[2091]: E0513 00:08:55.639991 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:55.644450 kubelet[2091]: E0513 00:08:55.644414 2091 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
May 13 00:08:56.641727 kubelet[2091]: E0513 00:08:56.641696 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:56.950314 kubelet[2091]: E0513 00:08:56.950140 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:08:57.199757 kubelet[2091]: I0513 00:08:57.199504 2091 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 00:08:58.192926 kubelet[2091]: E0513 00:08:58.192881 2091 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 00:08:58.271885 kubelet[2091]: I0513 00:08:58.271787 2091 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 13 00:08:58.587471 kubelet[2091]: I0513 00:08:58.587365 2091 apiserver.go:52] "Watching apiserver"
May 13 00:08:58.594806 kubelet[2091]: I0513 00:08:58.594759 2091 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 13 00:09:00.432104 systemd[1]: Reloading requested from client PID 2370 ('systemctl') (unit session-7.scope)...
May 13 00:09:00.432121 systemd[1]: Reloading...
May 13 00:09:00.489296 zram_generator::config[2409]: No configuration found.
May 13 00:09:00.581760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:09:00.649776 systemd[1]: Reloading finished in 217 ms.
May 13 00:09:00.671638 kubelet[2091]: E0513 00:09:00.671606 2091 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:00.682637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:09:00.697583 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:09:00.697828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:09:00.697888 systemd[1]: kubelet.service: Consumed 1.009s CPU time, 116.9M memory peak, 0B memory swap peak.
May 13 00:09:00.709549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:09:00.801844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:09:00.807457 (kubelet)[2452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 00:09:00.849164 kubelet[2452]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:09:00.849164 kubelet[2452]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:09:00.849164 kubelet[2452]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
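The recurring dns.go:153 "Nameserver limits exceeded" error throughout this log is the kubelet clamping the host's resolv.conf to the glibc resolver limit of three nameservers and warning about the rest, which is why the applied line always shows exactly 1.1.1.1 1.0.0.1 8.8.8.8. A small re-creation of that check (assumed logic, not the kubelet's source):

```go
// Assumed re-creation of the resolv.conf check behind the recurring
// "Nameserver limits exceeded" errors: glibc resolvers honor at most
// three nameserver entries, so only the first three are applied.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```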
May 13 00:09:00.849542 kubelet[2452]: I0513 00:09:00.849186 2452 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:09:00.854324 kubelet[2452]: I0513 00:09:00.854287 2452 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 00:09:00.854324 kubelet[2452]: I0513 00:09:00.854317 2452 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:09:00.854537 kubelet[2452]: I0513 00:09:00.854514 2452 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 00:09:00.855835 kubelet[2452]: I0513 00:09:00.855812 2452 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 00:09:00.857906 kubelet[2452]: I0513 00:09:00.857802 2452 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:09:00.860801 kubelet[2452]: E0513 00:09:00.860768 2452 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 13 00:09:00.860801 kubelet[2452]: I0513 00:09:00.860803 2452 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 13 00:09:00.865068 kubelet[2452]: I0513 00:09:00.865023 2452 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:09:00.865175 kubelet[2452]: I0513 00:09:00.865160 2452 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 00:09:00.865301 kubelet[2452]: I0513 00:09:00.865276 2452 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:09:00.865466 kubelet[2452]: I0513 00:09:00.865303 2452 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 00:09:00.865539 kubelet[2452]: I0513 00:09:00.865470 2452 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:09:00.865539 kubelet[2452]: I0513 00:09:00.865480 2452 container_manager_linux.go:300] "Creating device plugin manager"
May 13 00:09:00.865539 kubelet[2452]: I0513 00:09:00.865509 2452 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:09:00.865616 kubelet[2452]: I0513 00:09:00.865609 2452 kubelet.go:408] "Attempting to sync node with API server"
May 13 00:09:00.865640 kubelet[2452]: I0513 00:09:00.865621 2452 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:09:00.865662 kubelet[2452]: I0513 00:09:00.865641 2452 kubelet.go:314] "Adding apiserver pod source"
May 13 00:09:00.865662 kubelet[2452]: I0513 00:09:00.865649 2452 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:09:00.866531 kubelet[2452]: I0513 00:09:00.866242 2452 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 13 00:09:00.866734 kubelet[2452]: I0513 00:09:00.866704 2452 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:09:00.871275 kubelet[2452]: I0513 00:09:00.867055 2452 server.go:1269] "Started kubelet"
May 13 00:09:00.871275 kubelet[2452]: I0513 00:09:00.867545 2452 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:09:00.871275 kubelet[2452]: I0513 00:09:00.867797 2452 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:09:00.871275 kubelet[2452]: I0513 00:09:00.867403 2452 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:09:00.871275 kubelet[2452]: I0513 00:09:00.869096 2452 server.go:460] "Adding debug handlers to kubelet server"
May 13 00:09:00.875270 kubelet[2452]: I0513 00:09:00.874109 2452 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:09:00.878272 kubelet[2452]: I0513 00:09:00.878061 2452 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 00:09:00.878272 kubelet[2452]: I0513 00:09:00.878148 2452 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 00:09:00.879223 kubelet[2452]: I0513 00:09:00.879181 2452 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 00:09:00.879352 kubelet[2452]: I0513 00:09:00.879336 2452 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:09:00.880903 kubelet[2452]: E0513 00:09:00.880750 2452 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:09:00.882473 kubelet[2452]: I0513 00:09:00.882336 2452 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:09:00.886681 kubelet[2452]: E0513 00:09:00.886631 2452 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:09:00.887431 kubelet[2452]: I0513 00:09:00.887408 2452 factory.go:221] Registration of the containerd container factory successfully
May 13 00:09:00.887431 kubelet[2452]: I0513 00:09:00.887429 2452 factory.go:221] Registration of the systemd container factory successfully
May 13 00:09:00.892589 kubelet[2452]: I0513 00:09:00.892464 2452 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:09:00.894079 kubelet[2452]: I0513 00:09:00.894040 2452 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:09:00.894079 kubelet[2452]: I0513 00:09:00.894066 2452 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 00:09:00.894079 kubelet[2452]: I0513 00:09:00.894084 2452 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 00:09:00.894197 kubelet[2452]: E0513 00:09:00.894127 2452 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:09:00.920691 kubelet[2452]: I0513 00:09:00.920657 2452 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 00:09:00.920873 kubelet[2452]: I0513 00:09:00.920856 2452 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 00:09:00.920932 kubelet[2452]: I0513 00:09:00.920923 2452 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:09:00.921143 kubelet[2452]: I0513 00:09:00.921125 2452 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 00:09:00.921237 kubelet[2452]: I0513 00:09:00.921210 2452 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 00:09:00.921322 kubelet[2452]: I0513 00:09:00.921311 2452 policy_none.go:49] "None policy: Start"
May 13 00:09:00.921977 kubelet[2452]: I0513 00:09:00.921954 2452 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 00:09:00.921977 kubelet[2452]: I0513 00:09:00.921982 2452 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:09:00.922182 kubelet[2452]: I0513 00:09:00.922129 2452 state_mem.go:75] "Updated machine memory state"
May 13 00:09:00.926331 kubelet[2452]: I0513 00:09:00.926307 2452 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:09:00.926848 kubelet[2452]: I0513 00:09:00.926568 2452 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 00:09:00.926848 kubelet[2452]: I0513 00:09:00.926585 2452 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:09:00.926848 kubelet[2452]: I0513 00:09:00.926762 2452 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:09:01.001190 kubelet[2452]: E0513 00:09:01.001072 2452 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 13 00:09:01.031132 kubelet[2452]: I0513 00:09:01.030754 2452 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 13 00:09:01.037676 kubelet[2452]: I0513 00:09:01.037635 2452 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 13 00:09:01.037792 kubelet[2452]: I0513 00:09:01.037725 2452 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 13 00:09:01.080722 kubelet[2452]: I0513 00:09:01.080601 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7d116320686b8d09e8c465a51e62ca3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7d116320686b8d09e8c465a51e62ca3\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:09:01.080722 kubelet[2452]: I0513 00:09:01.080648 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7d116320686b8d09e8c465a51e62ca3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d7d116320686b8d09e8c465a51e62ca3\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:09:01.080722 kubelet[2452]: I0513 00:09:01.080674 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7d116320686b8d09e8c465a51e62ca3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d7d116320686b8d09e8c465a51e62ca3\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:09:01.080722 kubelet[2452]: I0513 00:09:01.080691 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:09:01.080722 kubelet[2452]: I0513 00:09:01.080708 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:09:01.080945 kubelet[2452]: I0513 00:09:01.080721 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:09:01.080945 kubelet[2452]: I0513 00:09:01.080737 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:09:01.080945 kubelet[2452]: I0513 00:09:01.080752 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:09:01.080945 kubelet[2452]: I0513 00:09:01.080767 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:09:01.300849 kubelet[2452]: E0513 00:09:01.300736 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:01.300958 kubelet[2452]: E0513 00:09:01.300943 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:01.301869 kubelet[2452]: E0513 00:09:01.301790 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:01.866446 kubelet[2452]: I0513 00:09:01.866417 2452 apiserver.go:52] "Watching apiserver"
May 13 00:09:01.879468 kubelet[2452]: I0513 00:09:01.879408 2452 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 13 00:09:01.915408 kubelet[2452]: E0513 00:09:01.915357 2452 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 13 00:09:01.915537 kubelet[2452]: E0513 00:09:01.915412 2452 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 13 00:09:01.915537 kubelet[2452]: E0513 00:09:01.915531 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:01.915612 kubelet[2452]: E0513 00:09:01.915592 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:01.916797 kubelet[2452]: E0513 00:09:01.916566 2452 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 00:09:01.916797 kubelet[2452]: E0513 00:09:01.916716 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:01.929931 kubelet[2452]: I0513 00:09:01.929862 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.929846375 podStartE2EDuration="1.929846375s" podCreationTimestamp="2025-05-13 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:09:01.929822866 +0000 UTC m=+1.119197466" watchObservedRunningTime="2025-05-13 00:09:01.929846375 +0000 UTC m=+1.119220975"
May 13 00:09:01.938198 kubelet[2452]: I0513 00:09:01.938136 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.938120332 podStartE2EDuration="1.938120332s" podCreationTimestamp="2025-05-13 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:09:01.937903279 +0000 UTC m=+1.127277879" watchObservedRunningTime="2025-05-13 00:09:01.938120332 +0000 UTC m=+1.127494932"
May 13 00:09:01.956831 kubelet[2452]: I0513 00:09:01.956760 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.956742984 podStartE2EDuration="1.956742984s" podCreationTimestamp="2025-05-13 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:09:01.948242018 +0000 UTC m=+1.137616578" watchObservedRunningTime="2025-05-13 00:09:01.956742984 +0000 UTC m=+1.146117584"
May 13 00:09:02.905206 kubelet[2452]: E0513 00:09:02.905002 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:02.905206 kubelet[2452]: E0513 00:09:02.905094 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:02.908390 kubelet[2452]: E0513 00:09:02.906082 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:04.728221 kubelet[2452]: E0513 00:09:04.728142 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:05.560846 sudo[1614]: pam_unix(sudo:session): session closed for user root
May 13 00:09:05.565032 sshd[1611]: pam_unix(sshd:session): session closed for user core
May 13 00:09:05.569200 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:33890.service: Deactivated successfully.
May 13 00:09:05.570774 systemd[1]: session-7.scope: Deactivated successfully.
May 13 00:09:05.570930 systemd[1]: session-7.scope: Consumed 7.307s CPU time, 151.3M memory peak, 0B memory swap peak.
May 13 00:09:05.572069 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit.
May 13 00:09:05.573227 systemd-logind[1419]: Removed session 7.
May 13 00:09:05.966106 kubelet[2452]: I0513 00:09:05.966078 2452 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 00:09:05.967547 kubelet[2452]: I0513 00:09:05.966779 2452 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 00:09:05.967579 containerd[1437]: time="2025-05-13T00:09:05.966557945Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 00:09:06.686126 systemd[1]: Created slice kubepods-besteffort-pod9c44a4ed_6fce_484f_9aad_250fd8a16a2b.slice - libcontainer container kubepods-besteffort-pod9c44a4ed_6fce_484f_9aad_250fd8a16a2b.slice.
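The "Updating runtime config through cri with podcidr" line just above is the kubelet handing the node's pod CIDR down to the runtime with the CRI UpdateRuntimeConfig RPC, which the CNI plumbing then picks up. A hedged sketch of that call (socket path assumed, CIDR taken from the log):

```go
// Hedged sketch of CRI UpdateRuntimeConfig, the call behind
// "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24".
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Until a CNI config is dropped in, containerd responds with the
	// "No cni config template is specified, wait for other system
	// components to drop the config." message seen in the log.
}
```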
May 13 00:09:06.717619 kubelet[2452]: I0513 00:09:06.717586 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c44a4ed-6fce-484f-9aad-250fd8a16a2b-lib-modules\") pod \"kube-proxy-r4j4p\" (UID: \"9c44a4ed-6fce-484f-9aad-250fd8a16a2b\") " pod="kube-system/kube-proxy-r4j4p"
May 13 00:09:06.717619 kubelet[2452]: I0513 00:09:06.717624 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c44a4ed-6fce-484f-9aad-250fd8a16a2b-kube-proxy\") pod \"kube-proxy-r4j4p\" (UID: \"9c44a4ed-6fce-484f-9aad-250fd8a16a2b\") " pod="kube-system/kube-proxy-r4j4p"
May 13 00:09:06.717803 kubelet[2452]: I0513 00:09:06.717642 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c44a4ed-6fce-484f-9aad-250fd8a16a2b-xtables-lock\") pod \"kube-proxy-r4j4p\" (UID: \"9c44a4ed-6fce-484f-9aad-250fd8a16a2b\") " pod="kube-system/kube-proxy-r4j4p"
May 13 00:09:06.717803 kubelet[2452]: I0513 00:09:06.717657 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lh6h\" (UniqueName: \"kubernetes.io/projected/9c44a4ed-6fce-484f-9aad-250fd8a16a2b-kube-api-access-8lh6h\") pod \"kube-proxy-r4j4p\" (UID: \"9c44a4ed-6fce-484f-9aad-250fd8a16a2b\") " pod="kube-system/kube-proxy-r4j4p"
May 13 00:09:06.998132 kubelet[2452]: E0513 00:09:06.998099 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:07.001441 containerd[1437]: time="2025-05-13T00:09:07.001368560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4j4p,Uid:9c44a4ed-6fce-484f-9aad-250fd8a16a2b,Namespace:kube-system,Attempt:0,}"
May 13 00:09:07.034561 containerd[1437]: time="2025-05-13T00:09:07.032629544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:09:07.034561 containerd[1437]: time="2025-05-13T00:09:07.032719947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:09:07.034561 containerd[1437]: time="2025-05-13T00:09:07.032735067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:09:07.034561 containerd[1437]: time="2025-05-13T00:09:07.034504636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:09:07.058446 systemd[1]: Started cri-containerd-3a0d27c9c6c171fb6e49da6935f8d9f3dea36d341a2353f7cdc99cc5c7555dc6.scope - libcontainer container 3a0d27c9c6c171fb6e49da6935f8d9f3dea36d341a2353f7cdc99cc5c7555dc6.
May 13 00:09:07.064462 systemd[1]: Created slice kubepods-besteffort-podf848c86b_7639_429f_a714_b339186731de.slice - libcontainer container kubepods-besteffort-podf848c86b_7639_429f_a714_b339186731de.slice.
May 13 00:09:07.080237 containerd[1437]: time="2025-05-13T00:09:07.080053936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4j4p,Uid:9c44a4ed-6fce-484f-9aad-250fd8a16a2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a0d27c9c6c171fb6e49da6935f8d9f3dea36d341a2353f7cdc99cc5c7555dc6\""
May 13 00:09:07.080988 kubelet[2452]: E0513 00:09:07.080964 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:07.084311 containerd[1437]: time="2025-05-13T00:09:07.084279532Z" level=info msg="CreateContainer within sandbox \"3a0d27c9c6c171fb6e49da6935f8d9f3dea36d341a2353f7cdc99cc5c7555dc6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 00:09:07.097071 containerd[1437]: time="2025-05-13T00:09:07.097015245Z" level=info msg="CreateContainer within sandbox \"3a0d27c9c6c171fb6e49da6935f8d9f3dea36d341a2353f7cdc99cc5c7555dc6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2e774fbee258d8953b6f275d58897f77e1e763ea55c4e6586d98096f83718183\""
May 13 00:09:07.098412 containerd[1437]: time="2025-05-13T00:09:07.098373282Z" level=info msg="StartContainer for \"2e774fbee258d8953b6f275d58897f77e1e763ea55c4e6586d98096f83718183\""
May 13 00:09:07.119443 kubelet[2452]: I0513 00:09:07.119406 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h64fx\" (UniqueName: \"kubernetes.io/projected/f848c86b-7639-429f-a714-b339186731de-kube-api-access-h64fx\") pod \"tigera-operator-6f6897fdc5-8tbgq\" (UID: \"f848c86b-7639-429f-a714-b339186731de\") " pod="tigera-operator/tigera-operator-6f6897fdc5-8tbgq"
May 13 00:09:07.119443 kubelet[2452]: I0513 00:09:07.119451 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f848c86b-7639-429f-a714-b339186731de-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-8tbgq\" (UID: \"f848c86b-7639-429f-a714-b339186731de\") " pod="tigera-operator/tigera-operator-6f6897fdc5-8tbgq"
May 13 00:09:07.124471 systemd[1]: Started cri-containerd-2e774fbee258d8953b6f275d58897f77e1e763ea55c4e6586d98096f83718183.scope - libcontainer container 2e774fbee258d8953b6f275d58897f77e1e763ea55c4e6586d98096f83718183.
May 13 00:09:07.152551 containerd[1437]: time="2025-05-13T00:09:07.152508059Z" level=info msg="StartContainer for \"2e774fbee258d8953b6f275d58897f77e1e763ea55c4e6586d98096f83718183\" returns successfully"
May 13 00:09:07.368325 containerd[1437]: time="2025-05-13T00:09:07.368200222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-8tbgq,Uid:f848c86b-7639-429f-a714-b339186731de,Namespace:tigera-operator,Attempt:0,}"
May 13 00:09:07.387564 containerd[1437]: time="2025-05-13T00:09:07.387471915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:09:07.387564 containerd[1437]: time="2025-05-13T00:09:07.387528397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:09:07.387564 containerd[1437]: time="2025-05-13T00:09:07.387544397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:09:07.388318 containerd[1437]: time="2025-05-13T00:09:07.388229656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:09:07.403427 systemd[1]: Started cri-containerd-9dd25d3a345c33b45962a5d5fcd06a8109a26b13e15f0cbfd655911e10508d03.scope - libcontainer container 9dd25d3a345c33b45962a5d5fcd06a8109a26b13e15f0cbfd655911e10508d03.
May 13 00:09:07.429030 containerd[1437]: time="2025-05-13T00:09:07.428958742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-8tbgq,Uid:f848c86b-7639-429f-a714-b339186731de,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9dd25d3a345c33b45962a5d5fcd06a8109a26b13e15f0cbfd655911e10508d03\""
May 13 00:09:07.430707 containerd[1437]: time="2025-05-13T00:09:07.430684230Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 13 00:09:07.919641 kubelet[2452]: E0513 00:09:07.919609 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:08.667621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584767036.mount: Deactivated successfully.
May 13 00:09:09.082087 containerd[1437]: time="2025-05-13T00:09:09.082036107Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:09:09.082672 containerd[1437]: time="2025-05-13T00:09:09.082638242Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 13 00:09:09.083444 containerd[1437]: time="2025-05-13T00:09:09.083419662Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:09:09.085562 containerd[1437]: time="2025-05-13T00:09:09.085510393Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:09:09.086625 containerd[1437]: time="2025-05-13T00:09:09.086585660Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.655865509s"
May 13 00:09:09.086667 containerd[1437]: time="2025-05-13T00:09:09.086625781Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 13 00:09:09.092362 containerd[1437]: time="2025-05-13T00:09:09.092320522Z" level=info msg="CreateContainer within sandbox \"9dd25d3a345c33b45962a5d5fcd06a8109a26b13e15f0cbfd655911e10508d03\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 13 00:09:09.099951 containerd[1437]: time="2025-05-13T00:09:09.099901550Z" level=info msg="CreateContainer within sandbox \"9dd25d3a345c33b45962a5d5fcd06a8109a26b13e15f0cbfd655911e10508d03\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"303ce9175c382b2757b2d208e8c989b5ae51b37f9b26174a835104c6e7d3559a\""
May 13 00:09:09.100366 containerd[1437]: time="2025-05-13T00:09:09.100340161Z" level=info msg="StartContainer for \"303ce9175c382b2757b2d208e8c989b5ae51b37f9b26174a835104c6e7d3559a\""
May 13 00:09:09.129448 systemd[1]: Started cri-containerd-303ce9175c382b2757b2d208e8c989b5ae51b37f9b26174a835104c6e7d3559a.scope - libcontainer container 303ce9175c382b2757b2d208e8c989b5ae51b37f9b26174a835104c6e7d3559a.
May 13 00:09:09.152362 containerd[1437]: time="2025-05-13T00:09:09.152310410Z" level=info msg="StartContainer for \"303ce9175c382b2757b2d208e8c989b5ae51b37f9b26174a835104c6e7d3559a\" returns successfully"
May 13 00:09:09.953841 kubelet[2452]: I0513 00:09:09.953732 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r4j4p" podStartSLOduration=3.953643837 podStartE2EDuration="3.953643837s" podCreationTimestamp="2025-05-13 00:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:09:07.926371175 +0000 UTC m=+7.115745775" watchObservedRunningTime="2025-05-13 00:09:09.953643837 +0000 UTC m=+9.143018437"
May 13 00:09:11.376916 kubelet[2452]: E0513 00:09:11.376871 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:11.386051 kubelet[2452]: I0513 00:09:11.385942 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-8tbgq" podStartSLOduration=2.725833679 podStartE2EDuration="4.385924455s" podCreationTimestamp="2025-05-13 00:09:07 +0000 UTC" firstStartedPulling="2025-05-13 00:09:07.430008571 +0000 UTC m=+6.619383171" lastFinishedPulling="2025-05-13 00:09:09.090099347 +0000 UTC m=+8.279473947" observedRunningTime="2025-05-13 00:09:09.955438082 +0000 UTC m=+9.144812682" watchObservedRunningTime="2025-05-13 00:09:11.385924455 +0000 UTC m=+10.575299095"
May 13 00:09:11.428698 kubelet[2452]: E0513 00:09:11.428637 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:09:13.618228 systemd[1]: Created slice kubepods-besteffort-pod39ec32a7_0add_4ca2_9b5a_848ca7a4de95.slice - libcontainer container kubepods-besteffort-pod39ec32a7_0add_4ca2_9b5a_848ca7a4de95.slice.
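The tigera-operator pull above goes through the same CRI image service the pause image used earlier, with containerd reporting the resolved digest and pull duration. A sketch of issuing and timing such a pull via the CRI image client (socket path assumed; image name and tag from the log):

```go
// Sketch of pulling and timing an image through the CRI image service,
// the path behind "PullImage ... Pulled image ... in 1.655865509s".
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	start := time.Now()
	resp, err := images.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.7"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// ImageRef is the resolved image id/digest containerd reports in its log.
	fmt.Printf("Pulled %s in %s\n", resp.ImageRef, time.Since(start))
}
```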
May 13 00:09:13.661794 kubelet[2452]: I0513 00:09:13.661748 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-policysync\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.661794 kubelet[2452]: I0513 00:09:13.661789 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-var-lib-calico\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.662622 kubelet[2452]: I0513 00:09:13.661808 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-cni-bin-dir\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.662622 kubelet[2452]: I0513 00:09:13.661826 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/39ec32a7-0add-4ca2-9b5a-848ca7a4de95-typha-certs\") pod \"calico-typha-575b4c57d9-ptwks\" (UID: \"39ec32a7-0add-4ca2-9b5a-848ca7a4de95\") " pod="calico-system/calico-typha-575b4c57d9-ptwks"
May 13 00:09:13.662622 kubelet[2452]: I0513 00:09:13.661841 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-lib-modules\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.662622 kubelet[2452]: I0513 00:09:13.661870 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-xtables-lock\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.662622 kubelet[2452]: I0513 00:09:13.661888 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bscl4\" (UniqueName: \"kubernetes.io/projected/d0db7374-78f3-4709-89db-3025b6c4b184-kube-api-access-bscl4\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.662469 systemd[1]: Created slice kubepods-besteffort-podd0db7374_78f3_4709_89db_3025b6c4b184.slice - libcontainer container kubepods-besteffort-podd0db7374_78f3_4709_89db_3025b6c4b184.slice.
May 13 00:09:13.662840 kubelet[2452]: I0513 00:09:13.661915 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39ec32a7-0add-4ca2-9b5a-848ca7a4de95-tigera-ca-bundle\") pod \"calico-typha-575b4c57d9-ptwks\" (UID: \"39ec32a7-0add-4ca2-9b5a-848ca7a4de95\") " pod="calico-system/calico-typha-575b4c57d9-ptwks"
May 13 00:09:13.662840 kubelet[2452]: I0513 00:09:13.661933 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7kj\" (UniqueName: \"kubernetes.io/projected/39ec32a7-0add-4ca2-9b5a-848ca7a4de95-kube-api-access-br7kj\") pod \"calico-typha-575b4c57d9-ptwks\" (UID: \"39ec32a7-0add-4ca2-9b5a-848ca7a4de95\") " pod="calico-system/calico-typha-575b4c57d9-ptwks"
May 13 00:09:13.662840 kubelet[2452]: I0513 00:09:13.661948 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d0db7374-78f3-4709-89db-3025b6c4b184-node-certs\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.662840 kubelet[2452]: I0513 00:09:13.661965 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0db7374-78f3-4709-89db-3025b6c4b184-tigera-ca-bundle\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.662840 kubelet[2452]: I0513 00:09:13.661981 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-cni-log-dir\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.663006 kubelet[2452]: I0513 00:09:13.662000 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-var-run-calico\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.663006 kubelet[2452]: I0513 00:09:13.662026 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-cni-net-dir\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.663006 kubelet[2452]: I0513 00:09:13.662047 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d0db7374-78f3-4709-89db-3025b6c4b184-flexvol-driver-host\") pod \"calico-node-267x8\" (UID: \"d0db7374-78f3-4709-89db-3025b6c4b184\") " pod="calico-system/calico-node-267x8"
May 13 00:09:13.763796 kubelet[2452]: E0513 00:09:13.763764 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 00:09:13.763796 kubelet[2452]: W0513 00:09:13.763787 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 00:09:13.763959 kubelet[2452]: E0513 00:09:13.763810 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 00:09:13.771633 kubelet[2452]: E0513 00:09:13.771592 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bztmm" podUID="f17de286-e9a7-4976-97e8-a35d9794c721"
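Each driver-call.go:262 / driver-call.go:149 / plugins.go:691 group above is one probe of the nodeagent~uds FlexVolume directory: the uds driver binary is missing, so the exec produces no stdout, and unmarshalling that empty output as JSON fails with "unexpected end of JSON input". A minimal Go sketch of that sequence, assuming a simplified DriverStatus type rather than kubelet's real one:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a stand-in for the JSON reply a FlexVolume driver is
// expected to print on stdout; kubelet's actual type is richer.
type DriverStatus struct {
	Status string `json:"status"`
}

func main() {
	// Invoking a binary that cannot be found reproduces the W-level line:
	// exec: "uds": executable file not found in $PATH.
	out, err := exec.Command("uds", "init").Output()
	fmt.Println("driver call failed:", err)

	// out is empty, so decoding it reproduces the paired E-level line:
	// unexpected end of JSON input.
	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output:", err)
	}
}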
Error: unexpected end of JSON input" May 13 00:09:13.865188 kubelet[2452]: E0513 00:09:13.865093 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.865188 kubelet[2452]: W0513 00:09:13.865109 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.865188 kubelet[2452]: E0513 00:09:13.865122 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.865369 kubelet[2452]: E0513 00:09:13.865357 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.865477 kubelet[2452]: W0513 00:09:13.865423 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.865477 kubelet[2452]: E0513 00:09:13.865439 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.865768 kubelet[2452]: E0513 00:09:13.865706 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.865768 kubelet[2452]: W0513 00:09:13.865717 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.865768 kubelet[2452]: E0513 00:09:13.865727 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.866055 kubelet[2452]: E0513 00:09:13.866042 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.866188 kubelet[2452]: W0513 00:09:13.866115 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.866188 kubelet[2452]: E0513 00:09:13.866133 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.892439 kubelet[2452]: E0513 00:09:13.889865 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.892439 kubelet[2452]: W0513 00:09:13.892241 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.892439 kubelet[2452]: E0513 00:09:13.892297 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:13.892439 kubelet[2452]: I0513 00:09:13.892333 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f17de286-e9a7-4976-97e8-a35d9794c721-varrun\") pod \"csi-node-driver-bztmm\" (UID: \"f17de286-e9a7-4976-97e8-a35d9794c721\") " pod="calico-system/csi-node-driver-bztmm" May 13 00:09:13.896967 kubelet[2452]: E0513 00:09:13.896544 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.896967 kubelet[2452]: W0513 00:09:13.896570 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.896967 kubelet[2452]: E0513 00:09:13.896621 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.896967 kubelet[2452]: I0513 00:09:13.896651 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f17de286-e9a7-4976-97e8-a35d9794c721-socket-dir\") pod \"csi-node-driver-bztmm\" (UID: \"f17de286-e9a7-4976-97e8-a35d9794c721\") " pod="calico-system/csi-node-driver-bztmm" May 13 00:09:13.897437 kubelet[2452]: E0513 00:09:13.897420 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.897627 kubelet[2452]: W0513 00:09:13.897541 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.897808 kubelet[2452]: E0513 00:09:13.897746 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.897808 kubelet[2452]: I0513 00:09:13.897804 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpsj9\" (UniqueName: \"kubernetes.io/projected/f17de286-e9a7-4976-97e8-a35d9794c721-kube-api-access-wpsj9\") pod \"csi-node-driver-bztmm\" (UID: \"f17de286-e9a7-4976-97e8-a35d9794c721\") " pod="calico-system/csi-node-driver-bztmm" May 13 00:09:13.898424 kubelet[2452]: E0513 00:09:13.898255 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.898424 kubelet[2452]: W0513 00:09:13.898284 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.898424 kubelet[2452]: E0513 00:09:13.898322 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:13.902259 kubelet[2452]: E0513 00:09:13.902182 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.902259 kubelet[2452]: W0513 00:09:13.902201 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.902672 kubelet[2452]: E0513 00:09:13.902611 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.902672 kubelet[2452]: W0513 00:09:13.902625 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.902738 kubelet[2452]: E0513 00:09:13.902670 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.902738 kubelet[2452]: I0513 00:09:13.902717 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f17de286-e9a7-4976-97e8-a35d9794c721-kubelet-dir\") pod \"csi-node-driver-bztmm\" (UID: \"f17de286-e9a7-4976-97e8-a35d9794c721\") " pod="calico-system/csi-node-driver-bztmm" May 13 00:09:13.902794 kubelet[2452]: E0513 00:09:13.902738 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.903098 kubelet[2452]: E0513 00:09:13.902994 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.903098 kubelet[2452]: W0513 00:09:13.903014 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.903098 kubelet[2452]: E0513 00:09:13.903053 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.903248 kubelet[2452]: E0513 00:09:13.903237 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.903385 kubelet[2452]: W0513 00:09:13.903310 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.903385 kubelet[2452]: E0513 00:09:13.903326 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:13.903562 kubelet[2452]: E0513 00:09:13.903545 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.903562 kubelet[2452]: W0513 00:09:13.903561 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.903663 kubelet[2452]: E0513 00:09:13.903579 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.904376 kubelet[2452]: E0513 00:09:13.904353 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.904376 kubelet[2452]: W0513 00:09:13.904373 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.904515 kubelet[2452]: E0513 00:09:13.904391 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.904599 kubelet[2452]: E0513 00:09:13.904586 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.904599 kubelet[2452]: W0513 00:09:13.904598 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.904673 kubelet[2452]: E0513 00:09:13.904609 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.904827 kubelet[2452]: E0513 00:09:13.904808 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.904827 kubelet[2452]: W0513 00:09:13.904823 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.904901 kubelet[2452]: E0513 00:09:13.904834 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.905538 kubelet[2452]: E0513 00:09:13.905497 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.905538 kubelet[2452]: W0513 00:09:13.905517 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.905538 kubelet[2452]: E0513 00:09:13.905533 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:13.905730 kubelet[2452]: I0513 00:09:13.905557 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f17de286-e9a7-4976-97e8-a35d9794c721-registration-dir\") pod \"csi-node-driver-bztmm\" (UID: \"f17de286-e9a7-4976-97e8-a35d9794c721\") " pod="calico-system/csi-node-driver-bztmm" May 13 00:09:13.907415 kubelet[2452]: E0513 00:09:13.907384 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.907415 kubelet[2452]: W0513 00:09:13.907408 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.907502 kubelet[2452]: E0513 00:09:13.907425 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.911521 kubelet[2452]: E0513 00:09:13.910489 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:13.911521 kubelet[2452]: W0513 00:09:13.910511 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:13.911521 kubelet[2452]: E0513 00:09:13.910531 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:13.924138 kubelet[2452]: E0513 00:09:13.924012 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:13.926491 containerd[1437]: time="2025-05-13T00:09:13.926432648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575b4c57d9-ptwks,Uid:39ec32a7-0add-4ca2-9b5a-848ca7a4de95,Namespace:calico-system,Attempt:0,}" May 13 00:09:13.962870 containerd[1437]: time="2025-05-13T00:09:13.962233687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:13.963242 containerd[1437]: time="2025-05-13T00:09:13.962814179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:13.963242 containerd[1437]: time="2025-05-13T00:09:13.963084464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:13.963242 containerd[1437]: time="2025-05-13T00:09:13.963191426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
May 13 00:09:13.967108 kubelet[2452]: E0513 00:09:13.966854 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:13.968208 containerd[1437]: time="2025-05-13T00:09:13.968172046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-267x8,Uid:d0db7374-78f3-4709-89db-3025b6c4b184,Namespace:calico-system,Attempt:0,}" May 13 00:09:13.986732 systemd[1]: Started cri-containerd-4b30e932d75e21538beeb427b1d973235c33a1296960361fd483c9789fcbb92a.scope - libcontainer container 4b30e932d75e21538beeb427b1d973235c33a1296960361fd483c9789fcbb92a. May 13 00:09:14.006746 kubelet[2452]: E0513 00:09:14.006705 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.006746 kubelet[2452]: W0513 00:09:14.006733 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.006746 kubelet[2452]: E0513 00:09:14.006755 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
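Each RunPodSandbox message is a CRI call from kubelet into containerd, and the &PodSandboxMetadata{...} fields in the log are the request metadata verbatim; the sandbox id containerd returns is what later names the cri-containerd-<id>.scope systemd unit. A rough sketch of the equivalent call made by hand, assuming the google.golang.org/grpc and k8s.io/cri-api modules and containerd's default socket path:

// runsandbox.go - illustrative direct CRI call, mirroring the calico-node
// sandbox request visible in the log above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-267x8",
				Uid:       "d0db7374-78f3-4709-89db-3025b6c4b184",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd answers with the sandbox id that shows up in the
	// "returns sandbox id" and systemd scope lines.
	fmt.Println(resp.PodSandboxId)
}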
Error: unexpected end of JSON input" May 13 00:09:14.008746 kubelet[2452]: E0513 00:09:14.008662 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.008746 kubelet[2452]: W0513 00:09:14.008724 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.008872 kubelet[2452]: E0513 00:09:14.008808 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.009036 kubelet[2452]: E0513 00:09:14.009021 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.009036 kubelet[2452]: W0513 00:09:14.009035 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.009223 kubelet[2452]: E0513 00:09:14.009096 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.009279 kubelet[2452]: E0513 00:09:14.009242 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.009279 kubelet[2452]: W0513 00:09:14.009251 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.009422 kubelet[2452]: E0513 00:09:14.009400 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.009653 kubelet[2452]: E0513 00:09:14.009639 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.009653 kubelet[2452]: W0513 00:09:14.009651 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.010198 kubelet[2452]: E0513 00:09:14.010167 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.010576 kubelet[2452]: E0513 00:09:14.010484 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.010576 kubelet[2452]: W0513 00:09:14.010501 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.010576 kubelet[2452]: E0513 00:09:14.010548 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:14.010837 kubelet[2452]: E0513 00:09:14.010822 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.010980 kubelet[2452]: W0513 00:09:14.010837 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.011070 kubelet[2452]: E0513 00:09:14.011033 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.011364 kubelet[2452]: E0513 00:09:14.011343 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.011364 kubelet[2452]: W0513 00:09:14.011362 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.011472 kubelet[2452]: E0513 00:09:14.011448 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.011752 kubelet[2452]: E0513 00:09:14.011737 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.011752 kubelet[2452]: W0513 00:09:14.011751 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.011948 kubelet[2452]: E0513 00:09:14.011832 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.012056 kubelet[2452]: E0513 00:09:14.012036 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.012056 kubelet[2452]: W0513 00:09:14.012052 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.012391 kubelet[2452]: E0513 00:09:14.012151 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.012476 kubelet[2452]: E0513 00:09:14.012456 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.012476 kubelet[2452]: W0513 00:09:14.012473 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.012617 kubelet[2452]: E0513 00:09:14.012511 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:14.012755 kubelet[2452]: E0513 00:09:14.012740 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.012796 kubelet[2452]: W0513 00:09:14.012754 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.012962 kubelet[2452]: E0513 00:09:14.012945 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.013209 kubelet[2452]: E0513 00:09:14.013163 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.013209 kubelet[2452]: W0513 00:09:14.013177 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.013440 kubelet[2452]: E0513 00:09:14.013315 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.013899 kubelet[2452]: E0513 00:09:14.013539 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.013899 kubelet[2452]: W0513 00:09:14.013557 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.013899 kubelet[2452]: E0513 00:09:14.013705 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.014086 kubelet[2452]: E0513 00:09:14.014054 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.014086 kubelet[2452]: W0513 00:09:14.014073 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.014182 kubelet[2452]: E0513 00:09:14.014163 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.014425 kubelet[2452]: E0513 00:09:14.014389 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.014425 kubelet[2452]: W0513 00:09:14.014406 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.014543 kubelet[2452]: E0513 00:09:14.014524 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:14.014868 kubelet[2452]: E0513 00:09:14.014834 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.014868 kubelet[2452]: W0513 00:09:14.014853 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.014868 kubelet[2452]: E0513 00:09:14.014868 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.015194 kubelet[2452]: E0513 00:09:14.015133 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.015194 kubelet[2452]: W0513 00:09:14.015148 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.015194 kubelet[2452]: E0513 00:09:14.015159 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.016719 kubelet[2452]: E0513 00:09:14.016692 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.016719 kubelet[2452]: W0513 00:09:14.016716 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.016992 kubelet[2452]: E0513 00:09:14.016734 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.016992 kubelet[2452]: E0513 00:09:14.016972 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.016992 kubelet[2452]: W0513 00:09:14.016982 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.016992 kubelet[2452]: E0513 00:09:14.016991 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.017802 kubelet[2452]: E0513 00:09:14.017784 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.017860 kubelet[2452]: W0513 00:09:14.017804 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.017860 kubelet[2452]: E0513 00:09:14.017819 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:14.021051 kubelet[2452]: E0513 00:09:14.020975 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.021051 kubelet[2452]: W0513 00:09:14.020998 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.021051 kubelet[2452]: E0513 00:09:14.021014 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.025689 containerd[1437]: time="2025-05-13T00:09:14.025644816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-575b4c57d9-ptwks,Uid:39ec32a7-0add-4ca2-9b5a-848ca7a4de95,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b30e932d75e21538beeb427b1d973235c33a1296960361fd483c9789fcbb92a\"" May 13 00:09:14.026576 kubelet[2452]: E0513 00:09:14.026407 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:14.026576 kubelet[2452]: E0513 00:09:14.026490 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.026576 kubelet[2452]: W0513 00:09:14.026507 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.026576 kubelet[2452]: E0513 00:09:14.026523 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.027226 containerd[1437]: time="2025-05-13T00:09:14.027183405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 00:09:14.061030 containerd[1437]: time="2025-05-13T00:09:14.060817967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:14.061030 containerd[1437]: time="2025-05-13T00:09:14.060880928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:14.061030 containerd[1437]: time="2025-05-13T00:09:14.060893929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:14.062861 containerd[1437]: time="2025-05-13T00:09:14.061032771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:14.083457 systemd[1]: Started cri-containerd-0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0.scope - libcontainer container 0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0. 
May 13 00:09:14.104937 containerd[1437]: time="2025-05-13T00:09:14.104876048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-267x8,Uid:d0db7374-78f3-4709-89db-3025b6c4b184,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0\"" May 13 00:09:14.105755 kubelet[2452]: E0513 00:09:14.105716 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:14.739226 kubelet[2452]: E0513 00:09:14.739152 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:14.772311 kubelet[2452]: E0513 00:09:14.772257 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.772311 kubelet[2452]: W0513 00:09:14.772297 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.772311 kubelet[2452]: E0513 00:09:14.772315 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 13 00:09:14.773042 kubelet[2452]: E0513 00:09:14.773029 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.773069 kubelet[2452]: W0513 00:09:14.773042 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.773069 kubelet[2452]: E0513 00:09:14.773052 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.773204 kubelet[2452]: E0513 00:09:14.773193 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.773230 kubelet[2452]: W0513 00:09:14.773204 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.773230 kubelet[2452]: E0513 00:09:14.773214 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.773392 kubelet[2452]: E0513 00:09:14.773363 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.773392 kubelet[2452]: W0513 00:09:14.773381 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.773392 kubelet[2452]: E0513 00:09:14.773391 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.773552 kubelet[2452]: E0513 00:09:14.773538 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.773552 kubelet[2452]: W0513 00:09:14.773550 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.773604 kubelet[2452]: E0513 00:09:14.773559 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.773760 kubelet[2452]: E0513 00:09:14.773746 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.773786 kubelet[2452]: W0513 00:09:14.773759 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.773786 kubelet[2452]: E0513 00:09:14.773770 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:14.773964 kubelet[2452]: E0513 00:09:14.773951 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.773990 kubelet[2452]: W0513 00:09:14.773964 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.773990 kubelet[2452]: E0513 00:09:14.773975 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.774163 kubelet[2452]: E0513 00:09:14.774146 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.774163 kubelet[2452]: W0513 00:09:14.774163 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.774219 kubelet[2452]: E0513 00:09:14.774171 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.774469 kubelet[2452]: E0513 00:09:14.774416 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.774512 kubelet[2452]: W0513 00:09:14.774470 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.774512 kubelet[2452]: E0513 00:09:14.774483 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.774775 kubelet[2452]: E0513 00:09:14.774760 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.774775 kubelet[2452]: W0513 00:09:14.774774 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.774830 kubelet[2452]: E0513 00:09:14.774793 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:14.775024 kubelet[2452]: E0513 00:09:14.775008 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.775024 kubelet[2452]: W0513 00:09:14.775021 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.775102 kubelet[2452]: E0513 00:09:14.775029 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:14.775257 kubelet[2452]: E0513 00:09:14.775212 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:14.775257 kubelet[2452]: W0513 00:09:14.775223 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:14.775257 kubelet[2452]: E0513 00:09:14.775232 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:15.094323 update_engine[1430]: I20250513 00:09:15.094182 1430 update_attempter.cc:509] Updating boot flags... May 13 00:09:15.125851 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3073) May 13 00:09:15.194480 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3073) May 13 00:09:15.224336 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3077) May 13 00:09:15.894781 kubelet[2452]: E0513 00:09:15.894731 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bztmm" podUID="f17de286-e9a7-4976-97e8-a35d9794c721" May 13 00:09:16.560975 containerd[1437]: time="2025-05-13T00:09:16.560915148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:16.561995 containerd[1437]: time="2025-05-13T00:09:16.561451557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 00:09:16.562475 containerd[1437]: time="2025-05-13T00:09:16.562424894Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:16.564695 containerd[1437]: time="2025-05-13T00:09:16.564664413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:16.565536 containerd[1437]: time="2025-05-13T00:09:16.565300744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.538079978s" May 13 00:09:16.565536 containerd[1437]: time="2025-05-13T00:09:16.565334104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 00:09:16.566528 containerd[1437]: time="2025-05-13T00:09:16.566377442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:09:16.576736 containerd[1437]: time="2025-05-13T00:09:16.576695380Z" level=info msg="CreateContainer within sandbox 
\"4b30e932d75e21538beeb427b1d973235c33a1296960361fd483c9789fcbb92a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:09:16.590957 containerd[1437]: time="2025-05-13T00:09:16.590892186Z" level=info msg="CreateContainer within sandbox \"4b30e932d75e21538beeb427b1d973235c33a1296960361fd483c9789fcbb92a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"828f12f6ef33a3b9999197b23f8be88bc2a922821bb6d4bc225985b1936e9908\"" May 13 00:09:16.591509 containerd[1437]: time="2025-05-13T00:09:16.591471796Z" level=info msg="StartContainer for \"828f12f6ef33a3b9999197b23f8be88bc2a922821bb6d4bc225985b1936e9908\"" May 13 00:09:16.619469 systemd[1]: Started cri-containerd-828f12f6ef33a3b9999197b23f8be88bc2a922821bb6d4bc225985b1936e9908.scope - libcontainer container 828f12f6ef33a3b9999197b23f8be88bc2a922821bb6d4bc225985b1936e9908. May 13 00:09:16.653506 containerd[1437]: time="2025-05-13T00:09:16.653463866Z" level=info msg="StartContainer for \"828f12f6ef33a3b9999197b23f8be88bc2a922821bb6d4bc225985b1936e9908\" returns successfully" May 13 00:09:16.972552 kubelet[2452]: E0513 00:09:16.972525 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:16.983601 kubelet[2452]: I0513 00:09:16.983520 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-575b4c57d9-ptwks" podStartSLOduration=1.44406068 podStartE2EDuration="3.983505242s" podCreationTimestamp="2025-05-13 00:09:13 +0000 UTC" firstStartedPulling="2025-05-13 00:09:14.026803358 +0000 UTC m=+13.216177958" lastFinishedPulling="2025-05-13 00:09:16.56624788 +0000 UTC m=+15.755622520" observedRunningTime="2025-05-13 00:09:16.983451601 +0000 UTC m=+16.172826201" watchObservedRunningTime="2025-05-13 00:09:16.983505242 +0000 UTC m=+16.172879842" May 13 00:09:16.991332 kubelet[2452]: E0513 00:09:16.991293 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.991332 kubelet[2452]: W0513 00:09:16.991320 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.991332 kubelet[2452]: E0513 00:09:16.991340 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.991580 kubelet[2452]: E0513 00:09:16.991553 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.991580 kubelet[2452]: W0513 00:09:16.991570 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.991580 kubelet[2452]: E0513 00:09:16.991579 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:16.991756 kubelet[2452]: E0513 00:09:16.991735 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.991756 kubelet[2452]: W0513 00:09:16.991747 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.991756 kubelet[2452]: E0513 00:09:16.991756 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.991928 kubelet[2452]: E0513 00:09:16.991909 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.991928 kubelet[2452]: W0513 00:09:16.991921 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.991991 kubelet[2452]: E0513 00:09:16.991929 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.992123 kubelet[2452]: E0513 00:09:16.992104 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.992152 kubelet[2452]: W0513 00:09:16.992117 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.992152 kubelet[2452]: E0513 00:09:16.992132 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.992297 kubelet[2452]: E0513 00:09:16.992287 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.992332 kubelet[2452]: W0513 00:09:16.992298 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.992332 kubelet[2452]: E0513 00:09:16.992306 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.992468 kubelet[2452]: E0513 00:09:16.992457 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.992468 kubelet[2452]: W0513 00:09:16.992466 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.992514 kubelet[2452]: E0513 00:09:16.992475 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:16.992690 kubelet[2452]: E0513 00:09:16.992677 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.992690 kubelet[2452]: W0513 00:09:16.992687 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.992759 kubelet[2452]: E0513 00:09:16.992695 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.992880 kubelet[2452]: E0513 00:09:16.992867 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.992910 kubelet[2452]: W0513 00:09:16.992880 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.992910 kubelet[2452]: E0513 00:09:16.992888 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.993070 kubelet[2452]: E0513 00:09:16.993058 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.993070 kubelet[2452]: W0513 00:09:16.993069 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.993126 kubelet[2452]: E0513 00:09:16.993077 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.993227 kubelet[2452]: E0513 00:09:16.993216 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.993255 kubelet[2452]: W0513 00:09:16.993227 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.993255 kubelet[2452]: E0513 00:09:16.993234 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.993406 kubelet[2452]: E0513 00:09:16.993396 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.993406 kubelet[2452]: W0513 00:09:16.993406 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.993463 kubelet[2452]: E0513 00:09:16.993413 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:16.993571 kubelet[2452]: E0513 00:09:16.993561 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.993596 kubelet[2452]: W0513 00:09:16.993571 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.993596 kubelet[2452]: E0513 00:09:16.993578 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.993719 kubelet[2452]: E0513 00:09:16.993708 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.993746 kubelet[2452]: W0513 00:09:16.993723 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.993746 kubelet[2452]: E0513 00:09:16.993731 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:16.993882 kubelet[2452]: E0513 00:09:16.993872 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:16.993904 kubelet[2452]: W0513 00:09:16.993882 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:16.993904 kubelet[2452]: E0513 00:09:16.993890 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.030450 kubelet[2452]: E0513 00:09:17.030409 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.030450 kubelet[2452]: W0513 00:09:17.030435 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.030450 kubelet[2452]: E0513 00:09:17.030455 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.030711 kubelet[2452]: E0513 00:09:17.030685 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.030711 kubelet[2452]: W0513 00:09:17.030698 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.030711 kubelet[2452]: E0513 00:09:17.030712 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:17.030937 kubelet[2452]: E0513 00:09:17.030911 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.030937 kubelet[2452]: W0513 00:09:17.030924 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.031015 kubelet[2452]: E0513 00:09:17.030938 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.031155 kubelet[2452]: E0513 00:09:17.031133 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.031155 kubelet[2452]: W0513 00:09:17.031146 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.031206 kubelet[2452]: E0513 00:09:17.031159 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.031329 kubelet[2452]: E0513 00:09:17.031318 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.031365 kubelet[2452]: W0513 00:09:17.031330 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.031365 kubelet[2452]: E0513 00:09:17.031342 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.031544 kubelet[2452]: E0513 00:09:17.031523 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.031544 kubelet[2452]: W0513 00:09:17.031535 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.031591 kubelet[2452]: E0513 00:09:17.031547 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.031738 kubelet[2452]: E0513 00:09:17.031726 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.031738 kubelet[2452]: W0513 00:09:17.031736 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.031804 kubelet[2452]: E0513 00:09:17.031764 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:17.031875 kubelet[2452]: E0513 00:09:17.031864 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.031875 kubelet[2452]: W0513 00:09:17.031875 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.031919 kubelet[2452]: E0513 00:09:17.031899 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.032049 kubelet[2452]: E0513 00:09:17.032037 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.032049 kubelet[2452]: W0513 00:09:17.032047 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.032107 kubelet[2452]: E0513 00:09:17.032060 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.032214 kubelet[2452]: E0513 00:09:17.032204 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.032239 kubelet[2452]: W0513 00:09:17.032214 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.032239 kubelet[2452]: E0513 00:09:17.032226 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.032395 kubelet[2452]: E0513 00:09:17.032385 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.032395 kubelet[2452]: W0513 00:09:17.032394 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.032440 kubelet[2452]: E0513 00:09:17.032406 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.032599 kubelet[2452]: E0513 00:09:17.032590 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.032621 kubelet[2452]: W0513 00:09:17.032600 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.032621 kubelet[2452]: E0513 00:09:17.032611 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:17.032868 kubelet[2452]: E0513 00:09:17.032848 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.032868 kubelet[2452]: W0513 00:09:17.032866 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.032927 kubelet[2452]: E0513 00:09:17.032884 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.033070 kubelet[2452]: E0513 00:09:17.033059 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.033070 kubelet[2452]: W0513 00:09:17.033069 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.033120 kubelet[2452]: E0513 00:09:17.033081 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.033270 kubelet[2452]: E0513 00:09:17.033252 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.033307 kubelet[2452]: W0513 00:09:17.033277 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.033332 kubelet[2452]: E0513 00:09:17.033302 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.033429 kubelet[2452]: E0513 00:09:17.033418 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.033451 kubelet[2452]: W0513 00:09:17.033428 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.033475 kubelet[2452]: E0513 00:09:17.033451 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.033589 kubelet[2452]: E0513 00:09:17.033579 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.033615 kubelet[2452]: W0513 00:09:17.033599 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.033638 kubelet[2452]: E0513 00:09:17.033614 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:17.033823 kubelet[2452]: E0513 00:09:17.033813 2452 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:17.033844 kubelet[2452]: W0513 00:09:17.033823 2452 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:17.033844 kubelet[2452]: E0513 00:09:17.033832 2452 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:17.617424 containerd[1437]: time="2025-05-13T00:09:17.617370195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:17.617873 containerd[1437]: time="2025-05-13T00:09:17.617836683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 00:09:17.618519 containerd[1437]: time="2025-05-13T00:09:17.618493053Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:17.620411 containerd[1437]: time="2025-05-13T00:09:17.620371644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:17.621771 containerd[1437]: time="2025-05-13T00:09:17.621730827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.055319103s" May 13 00:09:17.621801 containerd[1437]: time="2025-05-13T00:09:17.621775107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 00:09:17.624476 containerd[1437]: time="2025-05-13T00:09:17.624438431Z" level=info msg="CreateContainer within sandbox \"0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:09:17.649472 containerd[1437]: time="2025-05-13T00:09:17.649396001Z" level=info msg="CreateContainer within sandbox \"0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d\"" May 13 00:09:17.650301 containerd[1437]: time="2025-05-13T00:09:17.650250815Z" level=info msg="StartContainer for \"3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d\"" May 13 00:09:17.684516 systemd[1]: Started cri-containerd-3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d.scope - libcontainer container 3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d. 
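The long run of driver-call.go failures above is kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init before Calico's flexvol-driver init container (created and started just above) has installed that binary: the exec fails, stdout stays empty, and unmarshalling empty output is exactly what raises "unexpected end of JSON input". A minimal sketch of that call path, assuming the upstream convention that a driver answers init with a JSON status object (the field set of driverStatus is an assumption):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print
// in response to "init"; the exact field set is an assumption.
type driverStatus struct {
	Status  string `json:"status"` // e.g. "Success", "Failure", "Not supported"
	Message string `json:"message,omitempty"`
}

// callDriver reproduces the failure mode in the log: exec the driver
// binary, then unmarshal whatever it printed.
func callDriver(path string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(path, args...).CombinedOutput()
	if err != nil {
		// A missing binary surfaces here as "executable file not found
		// in $PATH" and leaves out empty (compare driver-call.go:149).
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var status driverStatus
	// Unmarshalling the empty output is what produces
	// "unexpected end of JSON input" (compare driver-call.go:262).
	if uerr := json.Unmarshal(out, &status); uerr != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command %v: %w", args, uerr)
	}
	return &status, nil
}

func main() {
	st, err := callDriver(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(st, err)
}
```

Once the flexvol-driver container has copied the uds binary into place, the same probe should start returning parseable JSON and these warnings stop.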
May 13 00:09:17.727409 containerd[1437]: time="2025-05-13T00:09:17.726564110Z" level=info msg="StartContainer for \"3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d\" returns successfully" May 13 00:09:17.745074 systemd[1]: cri-containerd-3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d.scope: Deactivated successfully. May 13 00:09:17.774367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d-rootfs.mount: Deactivated successfully. May 13 00:09:17.793841 containerd[1437]: time="2025-05-13T00:09:17.786631097Z" level=info msg="shim disconnected" id=3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d namespace=k8s.io May 13 00:09:17.793841 containerd[1437]: time="2025-05-13T00:09:17.793836415Z" level=warning msg="cleaning up after shim disconnected" id=3adcda9f4eebd2fa05cbc77d494f0edcae9c5385b311f005bbf83ce5da430e9d namespace=k8s.io May 13 00:09:17.793841 containerd[1437]: time="2025-05-13T00:09:17.793853016Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:09:17.895002 kubelet[2452]: E0513 00:09:17.894866 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bztmm" podUID="f17de286-e9a7-4976-97e8-a35d9794c721" May 13 00:09:17.975449 kubelet[2452]: I0513 00:09:17.975417 2452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:09:17.975800 kubelet[2452]: E0513 00:09:17.975699 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:17.976031 kubelet[2452]: E0513 00:09:17.976005 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:17.980004 containerd[1437]: time="2025-05-13T00:09:17.979917074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:09:19.895486 kubelet[2452]: E0513 00:09:19.895439 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bztmm" podUID="f17de286-e9a7-4976-97e8-a35d9794c721" May 13 00:09:20.224912 containerd[1437]: time="2025-05-13T00:09:20.224860976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:20.225880 containerd[1437]: time="2025-05-13T00:09:20.225679668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 00:09:20.226722 containerd[1437]: time="2025-05-13T00:09:20.226653281Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:20.229609 containerd[1437]: time="2025-05-13T00:09:20.229331960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 13 00:09:20.230246 containerd[1437]: time="2025-05-13T00:09:20.230110091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.249666809s" May 13 00:09:20.230246 containerd[1437]: time="2025-05-13T00:09:20.230146331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 00:09:20.233664 containerd[1437]: time="2025-05-13T00:09:20.233621661Z" level=info msg="CreateContainer within sandbox \"0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:09:20.247566 containerd[1437]: time="2025-05-13T00:09:20.247434378Z" level=info msg="CreateContainer within sandbox \"0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262\"" May 13 00:09:20.248198 containerd[1437]: time="2025-05-13T00:09:20.248012826Z" level=info msg="StartContainer for \"4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262\"" May 13 00:09:20.276459 systemd[1]: Started cri-containerd-4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262.scope - libcontainer container 4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262. May 13 00:09:20.300127 containerd[1437]: time="2025-05-13T00:09:20.300082368Z" level=info msg="StartContainer for \"4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262\" returns successfully" May 13 00:09:20.878739 containerd[1437]: time="2025-05-13T00:09:20.878694139Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:09:20.881195 systemd[1]: cri-containerd-4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262.scope: Deactivated successfully. May 13 00:09:20.901016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262-rootfs.mount: Deactivated successfully. 
May 13 00:09:20.942670 containerd[1437]: time="2025-05-13T00:09:20.942609810Z" level=info msg="shim disconnected" id=4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262 namespace=k8s.io May 13 00:09:20.942670 containerd[1437]: time="2025-05-13T00:09:20.942667491Z" level=warning msg="cleaning up after shim disconnected" id=4e1e320cc9e262561674d32710224e665292444d4bd2122f6bb53db6e3b17262 namespace=k8s.io May 13 00:09:20.942670 containerd[1437]: time="2025-05-13T00:09:20.942677051Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:09:20.969218 kubelet[2452]: I0513 00:09:20.968978 2452 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 00:09:20.983871 kubelet[2452]: E0513 00:09:20.983747 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:20.987937 containerd[1437]: time="2025-05-13T00:09:20.987304288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:09:21.002161 kubelet[2452]: W0513 00:09:21.001796 2452 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object May 13 00:09:21.002161 kubelet[2452]: W0513 00:09:21.001877 2452 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object May 13 00:09:21.009380 systemd[1]: Created slice kubepods-burstable-podefa396e0_d0f8_4e2e_92a4_a9b755e3f9de.slice - libcontainer container kubepods-burstable-podefa396e0_d0f8_4e2e_92a4_a9b755e3f9de.slice. May 13 00:09:21.020546 systemd[1]: Created slice kubepods-burstable-poda77c4869_daeb_485e_a903_8ffb6cd275cb.slice - libcontainer container kubepods-burstable-poda77c4869_daeb_485e_a903_8ffb6cd275cb.slice. May 13 00:09:21.025296 kubelet[2452]: E0513 00:09:21.024946 2452 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 13 00:09:21.025296 kubelet[2452]: E0513 00:09:21.025114 2452 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 13 00:09:21.029952 systemd[1]: Created slice kubepods-besteffort-podabf96a25_9422_4227_9da0_78c6d9df4e1e.slice - libcontainer container kubepods-besteffort-podabf96a25_9422_4227_9da0_78c6d9df4e1e.slice. 
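kubelet_node_status.go's "Fast updating node status as it just became ready" marks the node flipping to Ready, and the scheduler immediately binds the coredns, calico-apiserver, and calico-kube-controllers pods whose kubepods slices are created around this point. The reflector warnings are expected at this instant: the node authorizer only lets a kubelet read secrets and configmaps once a pod referencing them is bound to the node, and the binding has not propagated yet. A hedged client-go sketch for inspecting the condition that just flipped (in-cluster access and the node name "localhost" from the log are the assumptions):

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "localhost" is the node name this log uses.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			// While the CNI plugin is uninitialized this typically reports
			// reason KubeletNotReady with a NetworkPluginNotReady message.
			fmt.Printf("Ready=%v reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```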
May 13 00:09:21.036783 systemd[1]: Created slice kubepods-besteffort-pod9a6f6bd6_2495_464c_927f_4c4a69a0b8bc.slice - libcontainer container kubepods-besteffort-pod9a6f6bd6_2495_464c_927f_4c4a69a0b8bc.slice. May 13 00:09:21.041464 systemd[1]: Created slice kubepods-besteffort-pod9eb635b5_09fc_4daa_ae36_54b042b695f8.slice - libcontainer container kubepods-besteffort-pod9eb635b5_09fc_4daa_ae36_54b042b695f8.slice. May 13 00:09:21.163001 kubelet[2452]: I0513 00:09:21.162857 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eb635b5-09fc-4daa-ae36-54b042b695f8-tigera-ca-bundle\") pod \"calico-kube-controllers-7d84469565-2x9zl\" (UID: \"9eb635b5-09fc-4daa-ae36-54b042b695f8\") " pod="calico-system/calico-kube-controllers-7d84469565-2x9zl" May 13 00:09:21.163001 kubelet[2452]: I0513 00:09:21.162910 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmnr\" (UniqueName: \"kubernetes.io/projected/abf96a25-9422-4227-9da0-78c6d9df4e1e-kube-api-access-cvmnr\") pod \"calico-apiserver-78445c69b9-wbfwl\" (UID: \"abf96a25-9422-4227-9da0-78c6d9df4e1e\") " pod="calico-apiserver/calico-apiserver-78445c69b9-wbfwl" May 13 00:09:21.163001 kubelet[2452]: I0513 00:09:21.162928 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efa396e0-d0f8-4e2e-92a4-a9b755e3f9de-config-volume\") pod \"coredns-6f6b679f8f-dsktj\" (UID: \"efa396e0-d0f8-4e2e-92a4-a9b755e3f9de\") " pod="kube-system/coredns-6f6b679f8f-dsktj" May 13 00:09:21.163001 kubelet[2452]: I0513 00:09:21.162953 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abf96a25-9422-4227-9da0-78c6d9df4e1e-calico-apiserver-certs\") pod \"calico-apiserver-78445c69b9-wbfwl\" (UID: \"abf96a25-9422-4227-9da0-78c6d9df4e1e\") " pod="calico-apiserver/calico-apiserver-78445c69b9-wbfwl" May 13 00:09:21.163001 kubelet[2452]: I0513 00:09:21.162970 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fkc\" (UniqueName: \"kubernetes.io/projected/9a6f6bd6-2495-464c-927f-4c4a69a0b8bc-kube-api-access-59fkc\") pod \"calico-apiserver-78445c69b9-vz5v9\" (UID: \"9a6f6bd6-2495-464c-927f-4c4a69a0b8bc\") " pod="calico-apiserver/calico-apiserver-78445c69b9-vz5v9" May 13 00:09:21.163229 kubelet[2452]: I0513 00:09:21.162987 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a77c4869-daeb-485e-a903-8ffb6cd275cb-config-volume\") pod \"coredns-6f6b679f8f-ztnr2\" (UID: \"a77c4869-daeb-485e-a903-8ffb6cd275cb\") " pod="kube-system/coredns-6f6b679f8f-ztnr2" May 13 00:09:21.163229 kubelet[2452]: I0513 00:09:21.163011 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8txd\" (UniqueName: \"kubernetes.io/projected/efa396e0-d0f8-4e2e-92a4-a9b755e3f9de-kube-api-access-k8txd\") pod \"coredns-6f6b679f8f-dsktj\" (UID: \"efa396e0-d0f8-4e2e-92a4-a9b755e3f9de\") " pod="kube-system/coredns-6f6b679f8f-dsktj" May 13 00:09:21.163993 kubelet[2452]: I0513 00:09:21.163359 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/9a6f6bd6-2495-464c-927f-4c4a69a0b8bc-calico-apiserver-certs\") pod \"calico-apiserver-78445c69b9-vz5v9\" (UID: \"9a6f6bd6-2495-464c-927f-4c4a69a0b8bc\") " pod="calico-apiserver/calico-apiserver-78445c69b9-vz5v9" May 13 00:09:21.163993 kubelet[2452]: I0513 00:09:21.163396 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zf8m\" (UniqueName: \"kubernetes.io/projected/9eb635b5-09fc-4daa-ae36-54b042b695f8-kube-api-access-8zf8m\") pod \"calico-kube-controllers-7d84469565-2x9zl\" (UID: \"9eb635b5-09fc-4daa-ae36-54b042b695f8\") " pod="calico-system/calico-kube-controllers-7d84469565-2x9zl" May 13 00:09:21.163993 kubelet[2452]: I0513 00:09:21.163432 2452 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5rgl\" (UniqueName: \"kubernetes.io/projected/a77c4869-daeb-485e-a903-8ffb6cd275cb-kube-api-access-v5rgl\") pod \"coredns-6f6b679f8f-ztnr2\" (UID: \"a77c4869-daeb-485e-a903-8ffb6cd275cb\") " pod="kube-system/coredns-6f6b679f8f-ztnr2" May 13 00:09:21.315548 kubelet[2452]: E0513 00:09:21.315505 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:21.317030 containerd[1437]: time="2025-05-13T00:09:21.316973347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dsktj,Uid:efa396e0-d0f8-4e2e-92a4-a9b755e3f9de,Namespace:kube-system,Attempt:0,}" May 13 00:09:21.328031 kubelet[2452]: E0513 00:09:21.327492 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:21.328985 containerd[1437]: time="2025-05-13T00:09:21.327964377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ztnr2,Uid:a77c4869-daeb-485e-a903-8ffb6cd275cb,Namespace:kube-system,Attempt:0,}" May 13 00:09:21.350707 containerd[1437]: time="2025-05-13T00:09:21.347538684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d84469565-2x9zl,Uid:9eb635b5-09fc-4daa-ae36-54b042b695f8,Namespace:calico-system,Attempt:0,}" May 13 00:09:21.661176 containerd[1437]: time="2025-05-13T00:09:21.661131036Z" level=error msg="Failed to destroy network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.661937 containerd[1437]: time="2025-05-13T00:09:21.661896566Z" level=error msg="encountered an error cleaning up failed sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.662214 containerd[1437]: time="2025-05-13T00:09:21.662088729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dsktj,Uid:efa396e0-d0f8-4e2e-92a4-a9b755e3f9de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.663327 containerd[1437]: time="2025-05-13T00:09:21.663291425Z" level=error msg="Failed to destroy network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.663627 kubelet[2452]: E0513 00:09:21.663569 2452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.663721 kubelet[2452]: E0513 00:09:21.663666 2452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dsktj" May 13 00:09:21.663721 kubelet[2452]: E0513 00:09:21.663687 2452 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-dsktj" May 13 00:09:21.664548 kubelet[2452]: E0513 00:09:21.663790 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-dsktj_kube-system(efa396e0-d0f8-4e2e-92a4-a9b755e3f9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-dsktj_kube-system(efa396e0-d0f8-4e2e-92a4-a9b755e3f9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dsktj" podUID="efa396e0-d0f8-4e2e-92a4-a9b755e3f9de" May 13 00:09:21.664663 containerd[1437]: time="2025-05-13T00:09:21.664033395Z" level=error msg="encountered an error cleaning up failed sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.664733 containerd[1437]: time="2025-05-13T00:09:21.664095796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ztnr2,Uid:a77c4869-daeb-485e-a903-8ffb6cd275cb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.665531 kubelet[2452]: E0513 00:09:21.665500 2452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.665677 kubelet[2452]: E0513 00:09:21.665546 2452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ztnr2" May 13 00:09:21.665677 kubelet[2452]: E0513 00:09:21.665564 2452 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-ztnr2" May 13 00:09:21.665677 kubelet[2452]: E0513 00:09:21.665604 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ztnr2_kube-system(a77c4869-daeb-485e-a903-8ffb6cd275cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ztnr2_kube-system(a77c4869-daeb-485e-a903-8ffb6cd275cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ztnr2" podUID="a77c4869-daeb-485e-a903-8ffb6cd275cb" May 13 00:09:21.685622 containerd[1437]: time="2025-05-13T00:09:21.685573969Z" level=error msg="Failed to destroy network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.685940 containerd[1437]: time="2025-05-13T00:09:21.685911693Z" level=error msg="encountered an error cleaning up failed sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.685981 containerd[1437]: time="2025-05-13T00:09:21.685963934Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7d84469565-2x9zl,Uid:9eb635b5-09fc-4daa-ae36-54b042b695f8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.686226 kubelet[2452]: E0513 00:09:21.686184 2452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:21.686295 kubelet[2452]: E0513 00:09:21.686243 2452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d84469565-2x9zl" May 13 00:09:21.686295 kubelet[2452]: E0513 00:09:21.686271 2452 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d84469565-2x9zl" May 13 00:09:21.686357 kubelet[2452]: E0513 00:09:21.686315 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d84469565-2x9zl_calico-system(9eb635b5-09fc-4daa-ae36-54b042b695f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d84469565-2x9zl_calico-system(9eb635b5-09fc-4daa-ae36-54b042b695f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d84469565-2x9zl" podUID="9eb635b5-09fc-4daa-ae36-54b042b695f8" May 13 00:09:21.901842 systemd[1]: Created slice kubepods-besteffort-podf17de286_e9a7_4976_97e8_a35d9794c721.slice - libcontainer container kubepods-besteffort-podf17de286_e9a7_4976_97e8_a35d9794c721.slice. 
May 13 00:09:21.919696 containerd[1437]: time="2025-05-13T00:09:21.919537796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bztmm,Uid:f17de286-e9a7-4976-97e8-a35d9794c721,Namespace:calico-system,Attempt:0,}" May 13 00:09:21.989180 kubelet[2452]: I0513 00:09:21.988704 2452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:09:21.992875 containerd[1437]: time="2025-05-13T00:09:21.992813194Z" level=info msg="StopPodSandbox for \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\"" May 13 00:09:21.993051 containerd[1437]: time="2025-05-13T00:09:21.993025437Z" level=info msg="Ensure that sandbox 79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3 in task-service has been cleanup successfully" May 13 00:09:21.999230 kubelet[2452]: I0513 00:09:21.999187 2452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:09:21.999977 containerd[1437]: time="2025-05-13T00:09:21.999932491Z" level=info msg="StopPodSandbox for \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\"" May 13 00:09:22.000179 containerd[1437]: time="2025-05-13T00:09:22.000133654Z" level=info msg="Ensure that sandbox b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8 in task-service has been cleanup successfully" May 13 00:09:22.008642 kubelet[2452]: I0513 00:09:22.008546 2452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:09:22.011710 containerd[1437]: time="2025-05-13T00:09:22.011658964Z" level=info msg="StopPodSandbox for \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\"" May 13 00:09:22.013936 containerd[1437]: time="2025-05-13T00:09:22.013698150Z" level=info msg="Ensure that sandbox c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e in task-service has been cleanup successfully" May 13 00:09:22.067416 containerd[1437]: time="2025-05-13T00:09:22.067354569Z" level=error msg="StopPodSandbox for \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\" failed" error="failed to destroy network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:22.068132 kubelet[2452]: E0513 00:09:22.067624 2452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:09:22.068132 kubelet[2452]: E0513 00:09:22.067697 2452 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3"} May 13 00:09:22.068132 kubelet[2452]: E0513 00:09:22.067758 2452 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"9eb635b5-09fc-4daa-ae36-54b042b695f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:09:22.068132 kubelet[2452]: E0513 00:09:22.067779 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9eb635b5-09fc-4daa-ae36-54b042b695f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d84469565-2x9zl" podUID="9eb635b5-09fc-4daa-ae36-54b042b695f8" May 13 00:09:22.077546 containerd[1437]: time="2025-05-13T00:09:22.077466061Z" level=error msg="StopPodSandbox for \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\" failed" error="failed to destroy network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:22.077778 kubelet[2452]: E0513 00:09:22.077724 2452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:09:22.077832 kubelet[2452]: E0513 00:09:22.077786 2452 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8"} May 13 00:09:22.077857 kubelet[2452]: E0513 00:09:22.077827 2452 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"efa396e0-d0f8-4e2e-92a4-a9b755e3f9de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:09:22.077911 kubelet[2452]: E0513 00:09:22.077856 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"efa396e0-d0f8-4e2e-92a4-a9b755e3f9de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-dsktj" podUID="efa396e0-d0f8-4e2e-92a4-a9b755e3f9de" May 13 00:09:22.090155 containerd[1437]: 
time="2025-05-13T00:09:22.090102706Z" level=error msg="StopPodSandbox for \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\" failed" error="failed to destroy network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:22.090588 kubelet[2452]: E0513 00:09:22.090450 2452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:09:22.090588 kubelet[2452]: E0513 00:09:22.090499 2452 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e"} May 13 00:09:22.090588 kubelet[2452]: E0513 00:09:22.090535 2452 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a77c4869-daeb-485e-a903-8ffb6cd275cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:09:22.090588 kubelet[2452]: E0513 00:09:22.090557 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a77c4869-daeb-485e-a903-8ffb6cd275cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-ztnr2" podUID="a77c4869-daeb-485e-a903-8ffb6cd275cb" May 13 00:09:22.106619 containerd[1437]: time="2025-05-13T00:09:22.106557680Z" level=error msg="Failed to destroy network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:22.107329 containerd[1437]: time="2025-05-13T00:09:22.107288810Z" level=error msg="encountered an error cleaning up failed sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:22.107406 containerd[1437]: time="2025-05-13T00:09:22.107347650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bztmm,Uid:f17de286-e9a7-4976-97e8-a35d9794c721,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:22.107600 kubelet[2452]: E0513 00:09:22.107560 2452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:22.107645 kubelet[2452]: E0513 00:09:22.107626 2452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bztmm" May 13 00:09:22.107681 kubelet[2452]: E0513 00:09:22.107645 2452 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bztmm" May 13 00:09:22.107706 kubelet[2452]: E0513 00:09:22.107687 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bztmm_calico-system(f17de286-e9a7-4976-97e8-a35d9794c721)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bztmm_calico-system(f17de286-e9a7-4976-97e8-a35d9794c721)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bztmm" podUID="f17de286-e9a7-4976-97e8-a35d9794c721" May 13 00:09:22.279771 kubelet[2452]: E0513 00:09:22.279716 2452 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 00:09:22.282286 kubelet[2452]: E0513 00:09:22.281881 2452 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 00:09:22.285256 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e-shm.mount: Deactivated successfully. May 13 00:09:22.285399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8-shm.mount: Deactivated successfully. 
May 13 00:09:22.295098 kubelet[2452]: E0513 00:09:22.294531 2452 projected.go:194] Error preparing data for projected volume kube-api-access-cvmnr for pod calico-apiserver/calico-apiserver-78445c69b9-wbfwl: failed to sync configmap cache: timed out waiting for the condition May 13 00:09:22.295218 kubelet[2452]: E0513 00:09:22.293725 2452 projected.go:194] Error preparing data for projected volume kube-api-access-59fkc for pod calico-apiserver/calico-apiserver-78445c69b9-vz5v9: failed to sync configmap cache: timed out waiting for the condition May 13 00:09:22.301704 kubelet[2452]: E0513 00:09:22.300999 2452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a6f6bd6-2495-464c-927f-4c4a69a0b8bc-kube-api-access-59fkc podName:9a6f6bd6-2495-464c-927f-4c4a69a0b8bc nodeName:}" failed. No retries permitted until 2025-05-13 00:09:22.795168697 +0000 UTC m=+21.984543297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-59fkc" (UniqueName: "kubernetes.io/projected/9a6f6bd6-2495-464c-927f-4c4a69a0b8bc-kube-api-access-59fkc") pod "calico-apiserver-78445c69b9-vz5v9" (UID: "9a6f6bd6-2495-464c-927f-4c4a69a0b8bc") : failed to sync configmap cache: timed out waiting for the condition May 13 00:09:22.301704 kubelet[2452]: E0513 00:09:22.301087 2452 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abf96a25-9422-4227-9da0-78c6d9df4e1e-kube-api-access-cvmnr podName:abf96a25-9422-4227-9da0-78c6d9df4e1e nodeName:}" failed. No retries permitted until 2025-05-13 00:09:22.801057013 +0000 UTC m=+21.990431613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cvmnr" (UniqueName: "kubernetes.io/projected/abf96a25-9422-4227-9da0-78c6d9df4e1e-kube-api-access-cvmnr") pod "calico-apiserver-78445c69b9-wbfwl" (UID: "abf96a25-9422-4227-9da0-78c6d9df4e1e") : failed to sync configmap cache: timed out waiting for the condition May 13 00:09:23.012397 kubelet[2452]: I0513 00:09:23.012362 2452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:09:23.013955 containerd[1437]: time="2025-05-13T00:09:23.013526206Z" level=info msg="StopPodSandbox for \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\"" May 13 00:09:23.013955 containerd[1437]: time="2025-05-13T00:09:23.013709728Z" level=info msg="Ensure that sandbox 57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67 in task-service has been cleanup successfully" May 13 00:09:23.043540 containerd[1437]: time="2025-05-13T00:09:23.043491100Z" level=error msg="StopPodSandbox for \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\" failed" error="failed to destroy network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:23.043984 kubelet[2452]: E0513 00:09:23.043733 2452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:09:23.043984 kubelet[2452]: E0513 00:09:23.043796 2452 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67"} May 13 00:09:23.043984 kubelet[2452]: E0513 00:09:23.043829 2452 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f17de286-e9a7-4976-97e8-a35d9794c721\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:09:23.043984 kubelet[2452]: E0513 00:09:23.043852 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f17de286-e9a7-4976-97e8-a35d9794c721\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bztmm" podUID="f17de286-e9a7-4976-97e8-a35d9794c721" May 13 00:09:23.133904 containerd[1437]: time="2025-05-13T00:09:23.133826826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78445c69b9-wbfwl,Uid:abf96a25-9422-4227-9da0-78c6d9df4e1e,Namespace:calico-apiserver,Attempt:0,}" May 13 00:09:23.140959 containerd[1437]: time="2025-05-13T00:09:23.140872114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78445c69b9-vz5v9,Uid:9a6f6bd6-2495-464c-927f-4c4a69a0b8bc,Namespace:calico-apiserver,Attempt:0,}" May 13 00:09:23.274399 containerd[1437]: time="2025-05-13T00:09:23.274230096Z" level=error msg="Failed to destroy network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:23.275781 containerd[1437]: time="2025-05-13T00:09:23.275634674Z" level=error msg="encountered an error cleaning up failed sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:23.276052 containerd[1437]: time="2025-05-13T00:09:23.276012118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78445c69b9-wbfwl,Uid:abf96a25-9422-4227-9da0-78c6d9df4e1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:23.276154 containerd[1437]: time="2025-05-13T00:09:23.276039999Z" level=error msg="Failed to destroy network for sandbox 
\"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:23.276554 kubelet[2452]: E0513 00:09:23.276511 2452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:23.276671 kubelet[2452]: E0513 00:09:23.276581 2452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78445c69b9-wbfwl" May 13 00:09:23.276671 kubelet[2452]: E0513 00:09:23.276602 2452 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78445c69b9-wbfwl" May 13 00:09:23.276671 kubelet[2452]: E0513 00:09:23.276646 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78445c69b9-wbfwl_calico-apiserver(abf96a25-9422-4227-9da0-78c6d9df4e1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78445c69b9-wbfwl_calico-apiserver(abf96a25-9422-4227-9da0-78c6d9df4e1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78445c69b9-wbfwl" podUID="abf96a25-9422-4227-9da0-78c6d9df4e1e" May 13 00:09:23.276798 containerd[1437]: time="2025-05-13T00:09:23.276612566Z" level=error msg="encountered an error cleaning up failed sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:23.276798 containerd[1437]: time="2025-05-13T00:09:23.276666446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78445c69b9-vz5v9,Uid:9a6f6bd6-2495-464c-927f-4c4a69a0b8bc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 
13 00:09:23.276874 kubelet[2452]: E0513 00:09:23.276807 2452 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:23.278236 kubelet[2452]: E0513 00:09:23.278198 2452 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78445c69b9-vz5v9" May 13 00:09:23.278353 kubelet[2452]: E0513 00:09:23.278238 2452 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78445c69b9-vz5v9" May 13 00:09:23.278353 kubelet[2452]: E0513 00:09:23.278311 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78445c69b9-vz5v9_calico-apiserver(9a6f6bd6-2495-464c-927f-4c4a69a0b8bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78445c69b9-vz5v9_calico-apiserver(9a6f6bd6-2495-464c-927f-4c4a69a0b8bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78445c69b9-vz5v9" podUID="9a6f6bd6-2495-464c-927f-4c4a69a0b8bc" May 13 00:09:23.283433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d-shm.mount: Deactivated successfully. May 13 00:09:23.964914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727973141.mount: Deactivated successfully. 
May 13 00:09:23.993035 containerd[1437]: time="2025-05-13T00:09:23.992965936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:23.993446 containerd[1437]: time="2025-05-13T00:09:23.993410341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 00:09:23.994350 containerd[1437]: time="2025-05-13T00:09:23.994315792Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:23.996338 containerd[1437]: time="2025-05-13T00:09:23.996272537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:23.996941 containerd[1437]: time="2025-05-13T00:09:23.996910665Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.009559617s" May 13 00:09:23.996992 containerd[1437]: time="2025-05-13T00:09:23.996943945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 00:09:24.005529 containerd[1437]: time="2025-05-13T00:09:24.005438008Z" level=info msg="CreateContainer within sandbox \"0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:09:24.014334 kubelet[2452]: I0513 00:09:24.014296 2452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:09:24.015012 containerd[1437]: time="2025-05-13T00:09:24.014963122Z" level=info msg="StopPodSandbox for \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\"" May 13 00:09:24.015227 containerd[1437]: time="2025-05-13T00:09:24.015129124Z" level=info msg="Ensure that sandbox 4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d in task-service has been cleanup successfully" May 13 00:09:24.018257 kubelet[2452]: I0513 00:09:24.018227 2452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:09:24.019315 containerd[1437]: time="2025-05-13T00:09:24.018936929Z" level=info msg="StopPodSandbox for \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\"" May 13 00:09:24.019315 containerd[1437]: time="2025-05-13T00:09:24.019103171Z" level=info msg="Ensure that sandbox e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11 in task-service has been cleanup successfully" May 13 00:09:24.033140 containerd[1437]: time="2025-05-13T00:09:24.033093458Z" level=info msg="CreateContainer within sandbox \"0b57e1ea2ead34c25561c841837f5f37b675690b5966a14e3db744be170256a0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7960252e0c6bb1902b6f6a1632f6c00e123076e170f3c4beb556239b0021be92\"" May 13 00:09:24.034147 containerd[1437]: 
time="2025-05-13T00:09:24.034122071Z" level=info msg="StartContainer for \"7960252e0c6bb1902b6f6a1632f6c00e123076e170f3c4beb556239b0021be92\"" May 13 00:09:24.044493 containerd[1437]: time="2025-05-13T00:09:24.044419154Z" level=error msg="StopPodSandbox for \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\" failed" error="failed to destroy network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:24.044864 kubelet[2452]: E0513 00:09:24.044727 2452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:09:24.044864 kubelet[2452]: E0513 00:09:24.044777 2452 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d"} May 13 00:09:24.044864 kubelet[2452]: E0513 00:09:24.044809 2452 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abf96a25-9422-4227-9da0-78c6d9df4e1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:09:24.044864 kubelet[2452]: E0513 00:09:24.044836 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abf96a25-9422-4227-9da0-78c6d9df4e1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78445c69b9-wbfwl" podUID="abf96a25-9422-4227-9da0-78c6d9df4e1e" May 13 00:09:24.049362 containerd[1437]: time="2025-05-13T00:09:24.049182331Z" level=error msg="StopPodSandbox for \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\" failed" error="failed to destroy network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:24.049468 kubelet[2452]: E0513 00:09:24.049397 2452 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:09:24.049468 kubelet[2452]: E0513 00:09:24.049437 2452 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11"} May 13 00:09:24.049468 kubelet[2452]: E0513 00:09:24.049464 2452 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a6f6bd6-2495-464c-927f-4c4a69a0b8bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:09:24.049619 kubelet[2452]: E0513 00:09:24.049483 2452 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a6f6bd6-2495-464c-927f-4c4a69a0b8bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78445c69b9-vz5v9" podUID="9a6f6bd6-2495-464c-927f-4c4a69a0b8bc" May 13 00:09:24.091475 systemd[1]: Started cri-containerd-7960252e0c6bb1902b6f6a1632f6c00e123076e170f3c4beb556239b0021be92.scope - libcontainer container 7960252e0c6bb1902b6f6a1632f6c00e123076e170f3c4beb556239b0021be92. May 13 00:09:24.121018 containerd[1437]: time="2025-05-13T00:09:24.120864147Z" level=info msg="StartContainer for \"7960252e0c6bb1902b6f6a1632f6c00e123076e170f3c4beb556239b0021be92\" returns successfully" May 13 00:09:24.292300 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:09:24.292406 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 00:09:25.027614 kubelet[2452]: E0513 00:09:25.027582 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:25.060962 kubelet[2452]: I0513 00:09:25.060794 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-267x8" podStartSLOduration=2.170680446 podStartE2EDuration="12.060776301s" podCreationTimestamp="2025-05-13 00:09:13 +0000 UTC" firstStartedPulling="2025-05-13 00:09:14.10761594 +0000 UTC m=+13.296990540" lastFinishedPulling="2025-05-13 00:09:23.997711835 +0000 UTC m=+23.187086395" observedRunningTime="2025-05-13 00:09:25.060614339 +0000 UTC m=+24.249988939" watchObservedRunningTime="2025-05-13 00:09:25.060776301 +0000 UTC m=+24.250150861" May 13 00:09:26.028465 kubelet[2452]: I0513 00:09:26.028424 2452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:09:26.028864 kubelet[2452]: E0513 00:09:26.028845 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:26.189890 kubelet[2452]: I0513 00:09:26.189848 2452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:09:26.190376 kubelet[2452]: E0513 00:09:26.190350 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:26.803299 kernel: bpftool[3899]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:09:26.969226 systemd-networkd[1374]: vxlan.calico: Link UP May 13 00:09:26.969238 systemd-networkd[1374]: vxlan.calico: Gained carrier May 13 00:09:27.030874 kubelet[2452]: E0513 00:09:27.030837 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:28.491407 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL May 13 00:09:31.897965 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:56140.service - OpenSSH per-connection server daemon (10.0.0.1:56140). May 13 00:09:31.941464 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 56140 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:31.942778 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:31.947284 systemd-logind[1419]: New session 8 of user core. May 13 00:09:31.954420 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:09:32.146592 sshd[3980]: pam_unix(sshd:session): session closed for user core May 13 00:09:32.149731 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:56140.service: Deactivated successfully. May 13 00:09:32.153412 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:09:32.153981 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. May 13 00:09:32.154763 systemd-logind[1419]: Removed session 8. 
May 13 00:09:32.895658 containerd[1437]: time="2025-05-13T00:09:32.895443383Z" level=info msg="StopPodSandbox for \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\"" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:32.982 [INFO][4017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:32.983 [INFO][4017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" iface="eth0" netns="/var/run/netns/cni-9e262b60-9b33-e65e-1720-9d831e1fd05c" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:32.983 [INFO][4017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" iface="eth0" netns="/var/run/netns/cni-9e262b60-9b33-e65e-1720-9d831e1fd05c" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:32.984 [INFO][4017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" iface="eth0" netns="/var/run/netns/cni-9e262b60-9b33-e65e-1720-9d831e1fd05c" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:32.984 [INFO][4017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:32.984 [INFO][4017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:33.071 [INFO][4026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:33.071 [INFO][4026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:33.071 [INFO][4026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:33.080 [WARNING][4026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:33.080 [INFO][4026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:33.081 [INFO][4026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:33.085597 containerd[1437]: 2025-05-13 00:09:33.083 [INFO][4017] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:09:33.085964 containerd[1437]: time="2025-05-13T00:09:33.085743426Z" level=info msg="TearDown network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\" successfully" May 13 00:09:33.085964 containerd[1437]: time="2025-05-13T00:09:33.085770507Z" level=info msg="StopPodSandbox for \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\" returns successfully" May 13 00:09:33.086790 kubelet[2452]: E0513 00:09:33.086312 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:33.087170 containerd[1437]: time="2025-05-13T00:09:33.086947797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ztnr2,Uid:a77c4869-daeb-485e-a903-8ffb6cd275cb,Namespace:kube-system,Attempt:1,}" May 13 00:09:33.088397 systemd[1]: run-netns-cni\x2d9e262b60\x2d9b33\x2de65e\x2d1720\x2d9d831e1fd05c.mount: Deactivated successfully. May 13 00:09:33.240798 systemd-networkd[1374]: cali3413267b0eb: Link UP May 13 00:09:33.241032 systemd-networkd[1374]: cali3413267b0eb: Gained carrier May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.154 [INFO][4034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0 coredns-6f6b679f8f- kube-system a77c4869-daeb-485e-a903-8ffb6cd275cb 781 0 2025-05-13 00:09:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-ztnr2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3413267b0eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Namespace="kube-system" Pod="coredns-6f6b679f8f-ztnr2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ztnr2-" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.155 [INFO][4034] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Namespace="kube-system" Pod="coredns-6f6b679f8f-ztnr2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.185 [INFO][4048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" HandleID="k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.197 [INFO][4048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" HandleID="k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d92d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-ztnr2", "timestamp":"2025-05-13 00:09:33.185213949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.197 [INFO][4048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.197 [INFO][4048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.197 [INFO][4048] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.199 [INFO][4048] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.212 [INFO][4048] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.217 [INFO][4048] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.219 [INFO][4048] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.221 [INFO][4048] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.221 [INFO][4048] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.223 [INFO][4048] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.227 [INFO][4048] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.231 [INFO][4048] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.231 [INFO][4048] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" host="localhost" May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.231 [INFO][4048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:09:33.255232 containerd[1437]: 2025-05-13 00:09:33.231 [INFO][4048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" HandleID="k8s-pod-network.7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.256684 containerd[1437]: 2025-05-13 00:09:33.234 [INFO][4034] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Namespace="kube-system" Pod="coredns-6f6b679f8f-ztnr2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a77c4869-daeb-485e-a903-8ffb6cd275cb", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-ztnr2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3413267b0eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:33.256684 containerd[1437]: 2025-05-13 00:09:33.234 [INFO][4034] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Namespace="kube-system" Pod="coredns-6f6b679f8f-ztnr2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.256684 containerd[1437]: 2025-05-13 00:09:33.234 [INFO][4034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3413267b0eb ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Namespace="kube-system" Pod="coredns-6f6b679f8f-ztnr2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.256684 containerd[1437]: 2025-05-13 00:09:33.241 [INFO][4034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Namespace="kube-system" Pod="coredns-6f6b679f8f-ztnr2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.256684 containerd[1437]: 2025-05-13 00:09:33.241
[INFO][4034] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Namespace="kube-system" Pod="coredns-6f6b679f8f-ztnr2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a77c4869-daeb-485e-a903-8ffb6cd275cb", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b", Pod:"coredns-6f6b679f8f-ztnr2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3413267b0eb", MAC:"12:50:a8:e2:01:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:33.256684 containerd[1437]: 2025-05-13 00:09:33.252 [INFO][4034] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b" Namespace="kube-system" Pod="coredns-6f6b679f8f-ztnr2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:09:33.281230 containerd[1437]: time="2025-05-13T00:09:33.281139482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:33.281523 containerd[1437]: time="2025-05-13T00:09:33.281211603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:33.281523 containerd[1437]: time="2025-05-13T00:09:33.281305563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:33.287134 containerd[1437]: time="2025-05-13T00:09:33.286771850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:33.318480 systemd[1]: Started cri-containerd-7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b.scope - libcontainer container 7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b.
May 13 00:09:33.333366 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:33.350699 containerd[1437]: time="2025-05-13T00:09:33.350656711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ztnr2,Uid:a77c4869-daeb-485e-a903-8ffb6cd275cb,Namespace:kube-system,Attempt:1,} returns sandbox id \"7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b\"" May 13 00:09:33.351488 kubelet[2452]: E0513 00:09:33.351465 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:33.353619 containerd[1437]: time="2025-05-13T00:09:33.353476695Z" level=info msg="CreateContainer within sandbox \"7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:09:33.370326 containerd[1437]: time="2025-05-13T00:09:33.370173356Z" level=info msg="CreateContainer within sandbox \"7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"108771fe613b235695b82f85912c5a7aeeb3bfc7099fc9585eb500c03fd6ac7e\"" May 13 00:09:33.370771 containerd[1437]: time="2025-05-13T00:09:33.370695481Z" level=info msg="StartContainer for \"108771fe613b235695b82f85912c5a7aeeb3bfc7099fc9585eb500c03fd6ac7e\"" May 13 00:09:33.392425 systemd[1]: Started cri-containerd-108771fe613b235695b82f85912c5a7aeeb3bfc7099fc9585eb500c03fd6ac7e.scope - libcontainer container 108771fe613b235695b82f85912c5a7aeeb3bfc7099fc9585eb500c03fd6ac7e. May 13 00:09:33.424091 containerd[1437]: time="2025-05-13T00:09:33.424038693Z" level=info msg="StartContainer for \"108771fe613b235695b82f85912c5a7aeeb3bfc7099fc9585eb500c03fd6ac7e\" returns successfully" May 13 00:09:34.050616 kubelet[2452]: E0513 00:09:34.045208 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:34.067763 kubelet[2452]: I0513 00:09:34.067283 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ztnr2" podStartSLOduration=27.067247364 podStartE2EDuration="27.067247364s" podCreationTimestamp="2025-05-13 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:09:34.05574395 +0000 UTC m=+33.245118550" watchObservedRunningTime="2025-05-13 00:09:34.067247364 +0000 UTC m=+33.256621964" May 13 00:09:34.251433 systemd-networkd[1374]: cali3413267b0eb: Gained IPv6LL May 13 00:09:34.895976 containerd[1437]: time="2025-05-13T00:09:34.895926438Z" level=info msg="StopPodSandbox for \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\"" May 13 00:09:34.896874 containerd[1437]: time="2025-05-13T00:09:34.896581323Z" level=info msg="StopPodSandbox for \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\"" May 13 00:09:34.896874 containerd[1437]: time="2025-05-13T00:09:34.896631964Z" level=info msg="StopPodSandbox for \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\"" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.957 [INFO][4211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:09:35.001549 
containerd[1437]: 2025-05-13 00:09:34.958 [INFO][4211] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" iface="eth0" netns="/var/run/netns/cni-18dc537a-514f-f45f-1337-ce7a12ab1e80" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.958 [INFO][4211] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" iface="eth0" netns="/var/run/netns/cni-18dc537a-514f-f45f-1337-ce7a12ab1e80" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4211] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" iface="eth0" netns="/var/run/netns/cni-18dc537a-514f-f45f-1337-ce7a12ab1e80" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.989 [INFO][4231] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.989 [INFO][4231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.989 [INFO][4231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.997 [WARNING][4231] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.997 [INFO][4231] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:34.998 [INFO][4231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:35.001549 containerd[1437]: 2025-05-13 00:09:35.000 [INFO][4211] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:09:35.002594 containerd[1437]: time="2025-05-13T00:09:35.002531071Z" level=info msg="TearDown network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\" successfully" May 13 00:09:35.002594 containerd[1437]: time="2025-05-13T00:09:35.002569191Z" level=info msg="StopPodSandbox for \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\" returns successfully" May 13 00:09:35.004405 containerd[1437]: time="2025-05-13T00:09:35.004015043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bztmm,Uid:f17de286-e9a7-4976-97e8-a35d9794c721,Namespace:calico-system,Attempt:1,}" May 13 00:09:35.004812 systemd[1]: run-netns-cni\x2d18dc537a\x2d514f\x2df45f\x2d1337\x2dce7a12ab1e80.mount: Deactivated successfully. May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4195] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4195] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" iface="eth0" netns="/var/run/netns/cni-d37b308d-b55e-e987-2f23-a284f5abd7d6" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4195] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" iface="eth0" netns="/var/run/netns/cni-d37b308d-b55e-e987-2f23-a284f5abd7d6" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4195] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" iface="eth0" netns="/var/run/netns/cni-d37b308d-b55e-e987-2f23-a284f5abd7d6" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4195] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.959 [INFO][4195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.989 [INFO][4233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.989 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:34.998 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:35.007 [WARNING][4233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:35.007 [INFO][4233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:35.009 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:35.013163 containerd[1437]: 2025-05-13 00:09:35.011 [INFO][4195] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:09:35.015070 systemd[1]: run-netns-cni\x2dd37b308d\x2db55e\x2de987\x2d2f23\x2da284f5abd7d6.mount: Deactivated successfully. May 13 00:09:35.016077 containerd[1437]: time="2025-05-13T00:09:35.016035018Z" level=info msg="TearDown network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\" successfully" May 13 00:09:35.016077 containerd[1437]: time="2025-05-13T00:09:35.016070139Z" level=info msg="StopPodSandbox for \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\" returns successfully" May 13 00:09:35.017075 containerd[1437]: time="2025-05-13T00:09:35.016971586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d84469565-2x9zl,Uid:9eb635b5-09fc-4daa-ae36-54b042b695f8,Namespace:calico-system,Attempt:1,}" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:34.963 [INFO][4210] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:34.964 [INFO][4210] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" iface="eth0" netns="/var/run/netns/cni-f3cfe393-acf3-179a-602e-37a6c42ef41f" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:34.964 [INFO][4210] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" iface="eth0" netns="/var/run/netns/cni-f3cfe393-acf3-179a-602e-37a6c42ef41f" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:34.964 [INFO][4210] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" iface="eth0" netns="/var/run/netns/cni-f3cfe393-acf3-179a-602e-37a6c42ef41f" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:34.964 [INFO][4210] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:34.964 [INFO][4210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:34.994 [INFO][4243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:34.997 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:35.009 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:35.017 [WARNING][4243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:35.017 [INFO][4243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:35.021 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:35.026488 containerd[1437]: 2025-05-13 00:09:35.024 [INFO][4210] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:09:35.027019 containerd[1437]: time="2025-05-13T00:09:35.026629582Z" level=info msg="TearDown network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\" successfully" May 13 00:09:35.027019 containerd[1437]: time="2025-05-13T00:09:35.026656823Z" level=info msg="StopPodSandbox for \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\" returns successfully" May 13 00:09:35.027932 containerd[1437]: time="2025-05-13T00:09:35.027707951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78445c69b9-wbfwl,Uid:abf96a25-9422-4227-9da0-78c6d9df4e1e,Namespace:calico-apiserver,Attempt:1,}" May 13 00:09:35.046519 kubelet[2452]: E0513 00:09:35.046485 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:35.093307 systemd[1]: run-netns-cni\x2df3cfe393\x2dacf3\x2d179a\x2d602e\x2d37a6c42ef41f.mount: Deactivated successfully. 
May 13 00:09:35.159709 systemd-networkd[1374]: cali66ff05710cc: Link UP May 13 00:09:35.159916 systemd-networkd[1374]: cali66ff05710cc: Gained carrier May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.064 [INFO][4255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bztmm-eth0 csi-node-driver- calico-system f17de286-e9a7-4976-97e8-a35d9794c721 815 0 2025-05-13 00:09:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bztmm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66ff05710cc [] []}} ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Namespace="calico-system" Pod="csi-node-driver-bztmm" WorkloadEndpoint="localhost-k8s-csi--node--driver--bztmm-" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.064 [INFO][4255] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Namespace="calico-system" Pod="csi-node-driver-bztmm" WorkloadEndpoint="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.107 [INFO][4298] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" HandleID="k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.121 [INFO][4298] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" HandleID="k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000305e80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bztmm", "timestamp":"2025-05-13 00:09:35.107358343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.121 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.121 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.121 [INFO][4298] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.123 [INFO][4298] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.129 [INFO][4298] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.138 [INFO][4298] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.140 [INFO][4298] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.143 [INFO][4298] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.143 [INFO][4298] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.145 [INFO][4298] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367 May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.149 [INFO][4298] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.154 [INFO][4298] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.154 [INFO][4298] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" host="localhost" May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.154 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:09:35.173615 containerd[1437]: 2025-05-13 00:09:35.154 [INFO][4298] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" HandleID="k8s-pod-network.5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.174154 containerd[1437]: 2025-05-13 00:09:35.156 [INFO][4255] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Namespace="calico-system" Pod="csi-node-driver-bztmm" WorkloadEndpoint="localhost-k8s-csi--node--driver--bztmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bztmm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f17de286-e9a7-4976-97e8-a35d9794c721", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bztmm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66ff05710cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:35.174154 containerd[1437]: 2025-05-13 00:09:35.156 [INFO][4255] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Namespace="calico-system" Pod="csi-node-driver-bztmm" WorkloadEndpoint="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.174154 containerd[1437]: 2025-05-13 00:09:35.156 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66ff05710cc ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Namespace="calico-system" Pod="csi-node-driver-bztmm" WorkloadEndpoint="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.174154 containerd[1437]: 2025-05-13 00:09:35.160 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Namespace="calico-system" Pod="csi-node-driver-bztmm" WorkloadEndpoint="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.174154 containerd[1437]: 2025-05-13 00:09:35.162 [INFO][4255] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Namespace="calico-system" Pod="csi-node-driver-bztmm" WorkloadEndpoint="localhost-k8s-csi--node--driver--bztmm-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bztmm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f17de286-e9a7-4976-97e8-a35d9794c721", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367", Pod:"csi-node-driver-bztmm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66ff05710cc", MAC:"0a:58:98:e9:54:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:35.174154 containerd[1437]: 2025-05-13 00:09:35.171 [INFO][4255] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367" Namespace="calico-system" Pod="csi-node-driver-bztmm" WorkloadEndpoint="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:09:35.190553 containerd[1437]: time="2025-05-13T00:09:35.190443603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:35.190553 containerd[1437]: time="2025-05-13T00:09:35.190503324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:35.190553 containerd[1437]: time="2025-05-13T00:09:35.190514684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:35.190845 containerd[1437]: time="2025-05-13T00:09:35.190598204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:35.214433 systemd[1]: Started cri-containerd-5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367.scope - libcontainer container 5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367.
May 13 00:09:35.226316 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:35.238885 containerd[1437]: time="2025-05-13T00:09:35.238826667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bztmm,Uid:f17de286-e9a7-4976-97e8-a35d9794c721,Namespace:calico-system,Attempt:1,} returns sandbox id \"5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367\"" May 13 00:09:35.241469 containerd[1437]: time="2025-05-13T00:09:35.241442648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:09:35.258956 systemd-networkd[1374]: calic0cf0c8d8df: Link UP May 13 00:09:35.259520 systemd-networkd[1374]: calic0cf0c8d8df: Gained carrier May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.096 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0 calico-apiserver-78445c69b9- calico-apiserver abf96a25-9422-4227-9da0-78c6d9df4e1e 817 0 2025-05-13 00:09:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78445c69b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78445c69b9-wbfwl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic0cf0c8d8df [] []}} ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-wbfwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.096 [INFO][4281] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-wbfwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.131 [INFO][4311] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" HandleID="k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.142 [INFO][4311] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" HandleID="k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ff7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78445c69b9-wbfwl", "timestamp":"2025-05-13 00:09:35.131109172 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.142 [INFO][4311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.155 [INFO][4311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.155 [INFO][4311] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.224 [INFO][4311] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.231 [INFO][4311] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.236 [INFO][4311] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.240 [INFO][4311] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.243 [INFO][4311] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.243 [INFO][4311] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.245 [INFO][4311] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8 May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.248 [INFO][4311] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.254 [INFO][4311] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.254 [INFO][4311] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" host="localhost" May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.254 [INFO][4311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:09:35.273973 containerd[1437]: 2025-05-13 00:09:35.254 [INFO][4311] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" HandleID="k8s-pod-network.d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.274504 containerd[1437]: 2025-05-13 00:09:35.256 [INFO][4281] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-wbfwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0", GenerateName:"calico-apiserver-78445c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf96a25-9422-4227-9da0-78c6d9df4e1e", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78445c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78445c69b9-wbfwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic0cf0c8d8df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:35.274504 containerd[1437]: 2025-05-13 00:09:35.256 [INFO][4281] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-wbfwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.274504 containerd[1437]: 2025-05-13 00:09:35.256 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0cf0c8d8df ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-wbfwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.274504 containerd[1437]: 2025-05-13 00:09:35.259 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-wbfwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.274504 containerd[1437]: 2025-05-13 00:09:35.261 [INFO][4281] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-wbfwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0", GenerateName:"calico-apiserver-78445c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf96a25-9422-4227-9da0-78c6d9df4e1e", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78445c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8", Pod:"calico-apiserver-78445c69b9-wbfwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic0cf0c8d8df", MAC:"a6:77:3e:3c:0f:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:35.274504 containerd[1437]: 2025-05-13 00:09:35.271 [INFO][4281] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-wbfwl" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:09:35.302553 containerd[1437]: time="2025-05-13T00:09:35.302320371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:35.302553 containerd[1437]: time="2025-05-13T00:09:35.302380572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:35.302553 containerd[1437]: time="2025-05-13T00:09:35.302391572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:35.302553 containerd[1437]: time="2025-05-13T00:09:35.302463813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:35.322437 systemd[1]: Started cri-containerd-d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8.scope - libcontainer container d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8. 
May 13 00:09:35.337435 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:35.362398 containerd[1437]: time="2025-05-13T00:09:35.362226087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78445c69b9-wbfwl,Uid:abf96a25-9422-4227-9da0-78c6d9df4e1e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8\"" May 13 00:09:35.365664 systemd-networkd[1374]: calib085c1c372e: Link UP May 13 00:09:35.366152 systemd-networkd[1374]: calib085c1c372e: Gained carrier May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.095 [INFO][4268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0 calico-kube-controllers-7d84469565- calico-system 9eb635b5-09fc-4daa-ae36-54b042b695f8 816 0 2025-05-13 00:09:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d84469565 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d84469565-2x9zl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib085c1c372e [] []}} ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Namespace="calico-system" Pod="calico-kube-controllers-7d84469565-2x9zl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.095 [INFO][4268] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Namespace="calico-system" Pod="calico-kube-controllers-7d84469565-2x9zl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.136 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" HandleID="k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.217 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" HandleID="k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031faa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d84469565-2x9zl", "timestamp":"2025-05-13 00:09:35.136300373 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.217 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.254 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.254 [INFO][4309] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.326 [INFO][4309] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.333 [INFO][4309] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.338 [INFO][4309] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.340 [INFO][4309] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.342 [INFO][4309] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.342 [INFO][4309] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.344 [INFO][4309] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415 May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.348 [INFO][4309] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.355 [INFO][4309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.355 [INFO][4309] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" host="localhost" May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.355 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:09:35.380924 containerd[1437]: 2025-05-13 00:09:35.355 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" HandleID="k8s-pod-network.d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.381548 containerd[1437]: 2025-05-13 00:09:35.361 [INFO][4268] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Namespace="calico-system" Pod="calico-kube-controllers-7d84469565-2x9zl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0", GenerateName:"calico-kube-controllers-7d84469565-", Namespace:"calico-system", SelfLink:"", UID:"9eb635b5-09fc-4daa-ae36-54b042b695f8", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d84469565", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d84469565-2x9zl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib085c1c372e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:35.381548 containerd[1437]: 2025-05-13 00:09:35.361 [INFO][4268] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Namespace="calico-system" Pod="calico-kube-controllers-7d84469565-2x9zl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.381548 containerd[1437]: 2025-05-13 00:09:35.361 [INFO][4268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib085c1c372e ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Namespace="calico-system" Pod="calico-kube-controllers-7d84469565-2x9zl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.381548 containerd[1437]: 2025-05-13 00:09:35.364 [INFO][4268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Namespace="calico-system" Pod="calico-kube-controllers-7d84469565-2x9zl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.381548 containerd[1437]: 2025-05-13 00:09:35.368 [INFO][4268] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Namespace="calico-system" Pod="calico-kube-controllers-7d84469565-2x9zl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0", GenerateName:"calico-kube-controllers-7d84469565-", Namespace:"calico-system", SelfLink:"", UID:"9eb635b5-09fc-4daa-ae36-54b042b695f8", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d84469565", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415", Pod:"calico-kube-controllers-7d84469565-2x9zl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib085c1c372e", MAC:"a6:bd:3a:a0:34:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:35.381548 containerd[1437]: 2025-05-13 00:09:35.378 [INFO][4268] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415" Namespace="calico-system" Pod="calico-kube-controllers-7d84469565-2x9zl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:09:35.396961 containerd[1437]: time="2025-05-13T00:09:35.396847042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:35.397127 containerd[1437]: time="2025-05-13T00:09:35.396981603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:35.397127 containerd[1437]: time="2025-05-13T00:09:35.397009443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:35.398161 containerd[1437]: time="2025-05-13T00:09:35.397505527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:35.419457 systemd[1]: Started cri-containerd-d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415.scope - libcontainer container d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415. 
May 13 00:09:35.431234 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:35.447505 containerd[1437]: time="2025-05-13T00:09:35.447452564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d84469565-2x9zl,Uid:9eb635b5-09fc-4daa-ae36-54b042b695f8,Namespace:calico-system,Attempt:1,} returns sandbox id \"d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415\"" May 13 00:09:35.895291 containerd[1437]: time="2025-05-13T00:09:35.895181239Z" level=info msg="StopPodSandbox for \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\"" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.938 [INFO][4506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.938 [INFO][4506] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" iface="eth0" netns="/var/run/netns/cni-3a6a7f13-70c1-2272-8c45-d5b61a16a956" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.939 [INFO][4506] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" iface="eth0" netns="/var/run/netns/cni-3a6a7f13-70c1-2272-8c45-d5b61a16a956" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.939 [INFO][4506] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" iface="eth0" netns="/var/run/netns/cni-3a6a7f13-70c1-2272-8c45-d5b61a16a956" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.939 [INFO][4506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.939 [INFO][4506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.961 [INFO][4514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.961 [INFO][4514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.961 [INFO][4514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.969 [WARNING][4514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.969 [INFO][4514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.971 [INFO][4514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:35.974969 containerd[1437]: 2025-05-13 00:09:35.973 [INFO][4506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:09:35.975564 containerd[1437]: time="2025-05-13T00:09:35.975205474Z" level=info msg="TearDown network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\" successfully" May 13 00:09:35.975564 containerd[1437]: time="2025-05-13T00:09:35.975236074Z" level=info msg="StopPodSandbox for \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\" returns successfully" May 13 00:09:35.976083 kubelet[2452]: E0513 00:09:35.976024 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:35.976473 containerd[1437]: time="2025-05-13T00:09:35.976433924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dsktj,Uid:efa396e0-d0f8-4e2e-92a4-a9b755e3f9de,Namespace:kube-system,Attempt:1,}" May 13 00:09:36.055538 kubelet[2452]: E0513 00:09:36.055509 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:36.091407 systemd[1]: run-netns-cni\x2d3a6a7f13\x2d70c1\x2d2272\x2d8c45\x2dd5b61a16a956.mount: Deactivated successfully. 
May 13 00:09:36.134781 containerd[1437]: time="2025-05-13T00:09:36.134724068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:36.135451 containerd[1437]: time="2025-05-13T00:09:36.135417113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 00:09:36.136129 containerd[1437]: time="2025-05-13T00:09:36.136082719Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:36.138289 containerd[1437]: time="2025-05-13T00:09:36.138236055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:36.139419 containerd[1437]: time="2025-05-13T00:09:36.138872820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 897.265331ms" May 13 00:09:36.139419 containerd[1437]: time="2025-05-13T00:09:36.138908060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 00:09:36.140448 containerd[1437]: time="2025-05-13T00:09:36.140239231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:09:36.142411 containerd[1437]: time="2025-05-13T00:09:36.142380007Z" level=info msg="CreateContainer within sandbox \"5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:09:36.166115 containerd[1437]: time="2025-05-13T00:09:36.165908508Z" level=info msg="CreateContainer within sandbox \"5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"31d233eed6391b00d7542b3847b7d845eb6f4203d43d41e54d7ec7227194735c\"" May 13 00:09:36.168526 containerd[1437]: time="2025-05-13T00:09:36.167192718Z" level=info msg="StartContainer for \"31d233eed6391b00d7542b3847b7d845eb6f4203d43d41e54d7ec7227194735c\"" May 13 00:09:36.198473 systemd[1]: Started cri-containerd-31d233eed6391b00d7542b3847b7d845eb6f4203d43d41e54d7ec7227194735c.scope - libcontainer container 31d233eed6391b00d7542b3847b7d845eb6f4203d43d41e54d7ec7227194735c. 
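
The pull above reports its own accounting inline: image reference, image id, repo digest, size "8844117", and a Go-syntax duration ("897.265331ms"). A small parser for that line shape, standard library only — the regex is fitted to the message format seen here and may not cover other containerd versions:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

func main() {
	// Abbreviated copy of the "Pulled image" message above.
	msg := `Pulled image "ghcr.io/flatcar/calico/csi:v3.29.3" with image id ` +
		`"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b", ` +
		`size "8844117" in 897.265331ms`

	re := regexp.MustCompile(`Pulled image "([^"]+)".*size "(\d+)" in (\S+)`)
	m := re.FindStringSubmatch(msg)
	if m == nil {
		panic("line did not match")
	}
	d, err := time.ParseDuration(m[3]) // containerd prints Go duration syntax
	if err != nil {
		panic(err)
	}
	fmt.Printf("image=%s size=%s bytes duration=%v\n", m[1], m[2], d)
}
```
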
May 13 00:09:36.212463 systemd-networkd[1374]: calic760c87c48c: Link UP May 13 00:09:36.214962 systemd-networkd[1374]: calic760c87c48c: Gained carrier May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.025 [INFO][4524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--dsktj-eth0 coredns-6f6b679f8f- kube-system efa396e0-d0f8-4e2e-92a4-a9b755e3f9de 839 0 2025-05-13 00:09:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-dsktj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic760c87c48c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Namespace="kube-system" Pod="coredns-6f6b679f8f-dsktj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dsktj-" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.026 [INFO][4524] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Namespace="kube-system" Pod="coredns-6f6b679f8f-dsktj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.064 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" HandleID="k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.078 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" HandleID="k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c970), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-dsktj", "timestamp":"2025-05-13 00:09:36.064275526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.078 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.078 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.078 [INFO][4537] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.081 [INFO][4537] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.177 [INFO][4537] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.184 [INFO][4537] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.189 [INFO][4537] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.191 [INFO][4537] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.191 [INFO][4537] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.193 [INFO][4537] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789 May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.200 [INFO][4537] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.207 [INFO][4537] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.207 [INFO][4537] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" host="localhost" May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.207 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
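
In the coredns endpoint dump just below, the ports print with Calico's numorstring.Protocol — a value that is either a protocol number or a name ("UDP", "TCP"), with a Type discriminant; in the dump, Type:1 apparently marks the string variant (StrVal set, NumVal 0x0). A simplified re-implementation of that union shape, with assumed semantics (the real type also handles JSON marshalling):

```go
package main

import "fmt"

// protocolKind discriminates the union, like the Type field in the dump.
type protocolKind int

const (
	kindNum protocolKind = iota // numeric protocol, e.g. 6
	kindStr                     // named protocol, e.g. "TCP" (Type:1 in the dump)
)

// protocol holds either a protocol number or a protocol name.
type protocol struct {
	kind   protocolKind
	numVal uint8
	strVal string
}

func protocolFromString(s string) protocol { return protocol{kind: kindStr, strVal: s} }
func protocolFromNum(n uint8) protocol     { return protocol{kind: kindNum, numVal: n} }

func (p protocol) String() string {
	if p.kind == kindStr {
		return p.strVal
	}
	return fmt.Sprintf("%d", p.numVal)
}

func main() {
	// The three coredns ports from the dump; 0x35 = 53, 0x23c1 = 9153.
	ports := []struct {
		name  string
		proto protocol
		port  uint16
	}{
		{"dns", protocolFromString("UDP"), 53},
		{"dns-tcp", protocolFromString("TCP"), 53},
		{"metrics", protocolFromString("TCP"), 9153},
	}
	for _, p := range ports {
		fmt.Printf("%s %s/%d\n", p.name, p.proto, p.port)
	}
}
```
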
May 13 00:09:36.233193 containerd[1437]: 2025-05-13 00:09:36.207 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" HandleID="k8s-pod-network.14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:36.234343 containerd[1437]: 2025-05-13 00:09:36.210 [INFO][4524] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Namespace="kube-system" Pod="coredns-6f6b679f8f-dsktj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dsktj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"efa396e0-d0f8-4e2e-92a4-a9b755e3f9de", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-dsktj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic760c87c48c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:36.234343 containerd[1437]: 2025-05-13 00:09:36.210 [INFO][4524] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Namespace="kube-system" Pod="coredns-6f6b679f8f-dsktj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:36.234343 containerd[1437]: 2025-05-13 00:09:36.210 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic760c87c48c ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Namespace="kube-system" Pod="coredns-6f6b679f8f-dsktj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:36.234343 containerd[1437]: 2025-05-13 00:09:36.214 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Namespace="kube-system" Pod="coredns-6f6b679f8f-dsktj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:36.234343 containerd[1437]: 2025-05-13 00:09:36.215 
[INFO][4524] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Namespace="kube-system" Pod="coredns-6f6b679f8f-dsktj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dsktj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"efa396e0-d0f8-4e2e-92a4-a9b755e3f9de", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789", Pod:"coredns-6f6b679f8f-dsktj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic760c87c48c", MAC:"62:24:5e:2d:07:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:36.234343 containerd[1437]: 2025-05-13 00:09:36.229 [INFO][4524] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789" Namespace="kube-system" Pod="coredns-6f6b679f8f-dsktj" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:09:36.236486 containerd[1437]: time="2025-05-13T00:09:36.236362771Z" level=info msg="StartContainer for \"31d233eed6391b00d7542b3847b7d845eb6f4203d43d41e54d7ec7227194735c\" returns successfully" May 13 00:09:36.270516 containerd[1437]: time="2025-05-13T00:09:36.270423513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:36.270516 containerd[1437]: time="2025-05-13T00:09:36.270482033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:36.270516 containerd[1437]: time="2025-05-13T00:09:36.270501273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:36.270707 containerd[1437]: time="2025-05-13T00:09:36.270590514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:36.289445 systemd[1]: Started cri-containerd-14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789.scope - libcontainer container 14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789. May 13 00:09:36.299944 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:36.316613 containerd[1437]: time="2025-05-13T00:09:36.316574188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dsktj,Uid:efa396e0-d0f8-4e2e-92a4-a9b755e3f9de,Namespace:kube-system,Attempt:1,} returns sandbox id \"14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789\"" May 13 00:09:36.317470 kubelet[2452]: E0513 00:09:36.317424 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:36.321085 containerd[1437]: time="2025-05-13T00:09:36.320937022Z" level=info msg="CreateContainer within sandbox \"14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:09:36.331666 containerd[1437]: time="2025-05-13T00:09:36.331610544Z" level=info msg="CreateContainer within sandbox \"14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b12bbc3abac47a8288906806037cfcb73c5f176102780d3bd17f8d9a5ccdf11\"" May 13 00:09:36.332749 containerd[1437]: time="2025-05-13T00:09:36.332576231Z" level=info msg="StartContainer for \"2b12bbc3abac47a8288906806037cfcb73c5f176102780d3bd17f8d9a5ccdf11\"" May 13 00:09:36.365443 systemd[1]: Started cri-containerd-2b12bbc3abac47a8288906806037cfcb73c5f176102780d3bd17f8d9a5ccdf11.scope - libcontainer container 2b12bbc3abac47a8288906806037cfcb73c5f176102780d3bd17f8d9a5ccdf11. May 13 00:09:36.387914 containerd[1437]: time="2025-05-13T00:09:36.387777896Z" level=info msg="StartContainer for \"2b12bbc3abac47a8288906806037cfcb73c5f176102780d3bd17f8d9a5ccdf11\" returns successfully" May 13 00:09:37.004423 systemd-networkd[1374]: calic0cf0c8d8df: Gained IPv6LL May 13 00:09:37.062672 kubelet[2452]: E0513 00:09:37.062600 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:37.068445 systemd-networkd[1374]: cali66ff05710cc: Gained IPv6LL May 13 00:09:37.165183 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:58790.service - OpenSSH per-connection server daemon (10.0.0.1:58790). May 13 00:09:37.212958 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 58790 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:37.214719 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:37.219915 systemd-logind[1419]: New session 9 of user core. May 13 00:09:37.226436 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:09:37.259385 systemd-networkd[1374]: calib085c1c372e: Gained IPv6LL May 13 00:09:37.496003 sshd[4687]: pam_unix(sshd:session): session closed for user core May 13 00:09:37.500364 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:58790.service: Deactivated successfully. May 13 00:09:37.503915 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:09:37.504669 systemd-logind[1419]: Session 9 logged out. 
Waiting for processes to exit. May 13 00:09:37.505568 systemd-logind[1419]: Removed session 9. May 13 00:09:37.576453 containerd[1437]: time="2025-05-13T00:09:37.576335555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:37.577332 containerd[1437]: time="2025-05-13T00:09:37.577303242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 00:09:37.579293 containerd[1437]: time="2025-05-13T00:09:37.578306010Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:37.580496 containerd[1437]: time="2025-05-13T00:09:37.580440226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:37.581404 containerd[1437]: time="2025-05-13T00:09:37.581367713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.441075842s" May 13 00:09:37.581479 containerd[1437]: time="2025-05-13T00:09:37.581407553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 00:09:37.582772 containerd[1437]: time="2025-05-13T00:09:37.582602682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 00:09:37.584032 containerd[1437]: time="2025-05-13T00:09:37.583860691Z" level=info msg="CreateContainer within sandbox \"d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:09:37.595304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500135420.mount: Deactivated successfully. May 13 00:09:37.601520 containerd[1437]: time="2025-05-13T00:09:37.601465703Z" level=info msg="CreateContainer within sandbox \"d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ddc1a86f70868cfb019f04fad786b15b6c01ba850648d479bee032f06167b67a\"" May 13 00:09:37.602178 containerd[1437]: time="2025-05-13T00:09:37.602002067Z" level=info msg="StartContainer for \"ddc1a86f70868cfb019f04fad786b15b6c01ba850648d479bee032f06167b67a\"" May 13 00:09:37.656464 systemd[1]: Started cri-containerd-ddc1a86f70868cfb019f04fad786b15b6c01ba850648d479bee032f06167b67a.scope - libcontainer container ddc1a86f70868cfb019f04fad786b15b6c01ba850648d479bee032f06167b67a. 
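
The "Gained IPv6LL" events for the cali* interfaces in this stretch mark systemd-networkd observing a kernel-assigned IPv6 link-local address on each new veth. Assuming the default EUI-64 derivation (networkd can also be configured for stable-privacy addresses), the address follows from the interface MAC as sketched below — using the pod-side MAC from the csi-node-driver dump purely as example input, since the host-side veth MACs are not shown:

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC derives the EUI-64 IPv6 link-local address:
// flip the universal/local bit of the first octet, splice ff:fe into
// the middle of the MAC, and prepend fe80::/64.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0] = 0xfe
	ip[1] = 0x80
	ip[8] = mac[0] ^ 0x02 // flip U/L bit
	ip[9] = mac[1]
	ip[10] = mac[2]
	ip[11] = 0xff
	ip[12] = 0xfe
	ip[13] = mac[3]
	ip[14] = mac[4]
	ip[15] = mac[5]
	return ip
}

func main() {
	mac, err := net.ParseMAC("0a:58:98:e9:54:f9") // example MAC from the trace
	if err != nil {
		panic(err)
	}
	fmt.Println(linkLocalFromMAC(mac)) // fe80::858:98ff:fee9:54f9
}
```
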
May 13 00:09:37.692476 containerd[1437]: time="2025-05-13T00:09:37.692424742Z" level=info msg="StartContainer for \"ddc1a86f70868cfb019f04fad786b15b6c01ba850648d479bee032f06167b67a\" returns successfully" May 13 00:09:37.896973 containerd[1437]: time="2025-05-13T00:09:37.896832470Z" level=info msg="StopPodSandbox for \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\"" May 13 00:09:37.899443 systemd-networkd[1374]: calic760c87c48c: Gained IPv6LL May 13 00:09:37.945982 kubelet[2452]: I0513 00:09:37.945787 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dsktj" podStartSLOduration=30.945766155 podStartE2EDuration="30.945766155s" podCreationTimestamp="2025-05-13 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:09:37.081226536 +0000 UTC m=+36.270601136" watchObservedRunningTime="2025-05-13 00:09:37.945766155 +0000 UTC m=+37.135140755" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.944 [INFO][4768] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.944 [INFO][4768] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" iface="eth0" netns="/var/run/netns/cni-8e2f4642-2034-344f-ab08-6d2a2acb750a" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.944 [INFO][4768] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" iface="eth0" netns="/var/run/netns/cni-8e2f4642-2034-344f-ab08-6d2a2acb750a" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.945 [INFO][4768] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" iface="eth0" netns="/var/run/netns/cni-8e2f4642-2034-344f-ab08-6d2a2acb750a" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.945 [INFO][4768] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.945 [INFO][4768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.966 [INFO][4778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.966 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.966 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.976 [WARNING][4778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.976 [INFO][4778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.982 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:37.985764 containerd[1437]: 2025-05-13 00:09:37.983 [INFO][4768] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:09:37.986334 containerd[1437]: time="2025-05-13T00:09:37.985959055Z" level=info msg="TearDown network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\" successfully" May 13 00:09:37.986334 containerd[1437]: time="2025-05-13T00:09:37.985988016Z" level=info msg="StopPodSandbox for \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\" returns successfully" May 13 00:09:37.986725 containerd[1437]: time="2025-05-13T00:09:37.986700421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78445c69b9-vz5v9,Uid:9a6f6bd6-2495-464c-927f-4c4a69a0b8bc,Namespace:calico-apiserver,Attempt:1,}" May 13 00:09:38.075916 kubelet[2452]: E0513 00:09:38.075867 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:38.089184 kubelet[2452]: I0513 00:09:38.089112 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78445c69b9-wbfwl" podStartSLOduration=22.872640203 podStartE2EDuration="25.089083287s" podCreationTimestamp="2025-05-13 00:09:13 +0000 UTC" firstStartedPulling="2025-05-13 00:09:35.366006557 +0000 UTC m=+34.555381157" lastFinishedPulling="2025-05-13 00:09:37.582449601 +0000 UTC m=+36.771824241" observedRunningTime="2025-05-13 00:09:38.088682444 +0000 UTC m=+37.278057044" watchObservedRunningTime="2025-05-13 00:09:38.089083287 +0000 UTC m=+37.278457887" May 13 00:09:38.093485 systemd[1]: run-netns-cni\x2d8e2f4642\x2d2034\x2d344f\x2dab08\x2d6d2a2acb750a.mount: Deactivated successfully. 
May 13 00:09:38.192718 systemd-networkd[1374]: calid53d91d7555: Link UP May 13 00:09:38.193221 systemd-networkd[1374]: calid53d91d7555: Gained carrier May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.029 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0 calico-apiserver-78445c69b9- calico-apiserver 9a6f6bd6-2495-464c-927f-4c4a69a0b8bc 876 0 2025-05-13 00:09:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78445c69b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78445c69b9-vz5v9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid53d91d7555 [] []}} ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-vz5v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.029 [INFO][4787] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-vz5v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.057 [INFO][4801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" HandleID="k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.072 [INFO][4801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" HandleID="k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000289a50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78445c69b9-vz5v9", "timestamp":"2025-05-13 00:09:38.057337137 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.072 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.072 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.072 [INFO][4801] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.075 [INFO][4801] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.168 [INFO][4801] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.173 [INFO][4801] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.174 [INFO][4801] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.176 [INFO][4801] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.176 [INFO][4801] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.177 [INFO][4801] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9 May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.181 [INFO][4801] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.187 [INFO][4801] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.187 [INFO][4801] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" host="localhost" May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.187 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
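The [4801] records above walk Calico's block-affinity IPAM end to end: take the host-wide lock, look up the host's affine /26 block (192.168.88.128/26), confirm and load it, claim one address under a new handle, write the block back, and release the lock. Below is a toy Go sketch of that flow; the types and function names are illustrative stand-ins, not Calico's actual API:

```go
// Toy sketch of the block-affinity IPAM flow visible in the records
// above: lock, find the host's affine block, claim a free address under
// a handle, persist the block, unlock. Block/Allocator/AutoAssign are
// illustrative names, not Calico's real types.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// Block models one /26 allocation block, e.g. 192.168.88.128/26.
type Block struct {
	CIDR      netip.Prefix
	Allocated map[netip.Addr]string // address -> handle ID
}

// freeAddr returns the first unallocated address in the block.
func (b *Block) freeAddr() (netip.Addr, bool) {
	for a := b.CIDR.Addr(); b.CIDR.Contains(a); a = a.Next() {
		if _, taken := b.Allocated[a]; !taken {
			return a, true
		}
	}
	return netip.Addr{}, false
}

type Allocator struct {
	mu       sync.Mutex        // stand-in for the host-wide IPAM lock
	affinity map[string]*Block // host -> affine block
}

// AutoAssign mirrors the logged sequence: acquire the lock, try the
// host's affine block, claim an IP, "write" the block, release the lock.
func (al *Allocator) AutoAssign(host, handle string) (netip.Addr, error) {
	al.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer al.mu.Unlock()

	blk, ok := al.affinity[host] // "Trying affinity for 192.168.88.128/26"
	if !ok {
		return netip.Addr{}, fmt.Errorf("no affine block for host %q", host)
	}
	addr, ok := blk.freeAddr()
	if !ok {
		return netip.Addr{}, fmt.Errorf("block %s is exhausted", blk.CIDR)
	}
	blk.Allocated[addr] = handle // "Writing block in order to claim IPs"
	return addr, nil             // "Successfully claimed IPs"
}

func main() {
	al := &Allocator{affinity: map[string]*Block{
		"localhost": {
			CIDR:      netip.MustParsePrefix("192.168.88.128/26"),
			Allocated: map[netip.Addr]string{},
		},
	}}
	ip, err := al.AutoAssign("localhost", "k8s-pod-network.9d8f6bda1c259dab")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // first free address in the block
}
```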
May 13 00:09:38.206644 containerd[1437]: 2025-05-13 00:09:38.188 [INFO][4801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" HandleID="k8s-pod-network.9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:38.207397 containerd[1437]: 2025-05-13 00:09:38.190 [INFO][4787] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-vz5v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0", GenerateName:"calico-apiserver-78445c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a6f6bd6-2495-464c-927f-4c4a69a0b8bc", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78445c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78445c69b9-vz5v9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid53d91d7555", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:38.207397 containerd[1437]: 2025-05-13 00:09:38.190 [INFO][4787] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-vz5v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:38.207397 containerd[1437]: 2025-05-13 00:09:38.190 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid53d91d7555 ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-vz5v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:38.207397 containerd[1437]: 2025-05-13 00:09:38.193 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-vz5v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:38.207397 containerd[1437]: 2025-05-13 00:09:38.194 [INFO][4787] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-vz5v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0", GenerateName:"calico-apiserver-78445c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a6f6bd6-2495-464c-927f-4c4a69a0b8bc", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78445c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9", Pod:"calico-apiserver-78445c69b9-vz5v9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid53d91d7555", MAC:"52:16:50:b5:d4:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:38.207397 containerd[1437]: 2025-05-13 00:09:38.202 [INFO][4787] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9" Namespace="calico-apiserver" Pod="calico-apiserver-78445c69b9-vz5v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:09:38.233804 containerd[1437]: time="2025-05-13T00:09:38.233699257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:38.233804 containerd[1437]: time="2025-05-13T00:09:38.233763057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:38.233804 containerd[1437]: time="2025-05-13T00:09:38.233778297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:38.233978 containerd[1437]: time="2025-05-13T00:09:38.233857338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:38.261446 systemd[1]: Started cri-containerd-9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9.scope - libcontainer container 9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9. 
May 13 00:09:38.272253 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:38.289656 containerd[1437]: time="2025-05-13T00:09:38.289616543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78445c69b9-vz5v9,Uid:9a6f6bd6-2495-464c-927f-4c4a69a0b8bc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9\"" May 13 00:09:38.292619 containerd[1437]: time="2025-05-13T00:09:38.292570204Z" level=info msg="CreateContainer within sandbox \"9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:09:38.307151 containerd[1437]: time="2025-05-13T00:09:38.307086269Z" level=info msg="CreateContainer within sandbox \"9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c08ae2564572baf6094557853642f25bfb759ae1a7eb7123b81ef0873b925ccc\"" May 13 00:09:38.308083 containerd[1437]: time="2025-05-13T00:09:38.307978956Z" level=info msg="StartContainer for \"c08ae2564572baf6094557853642f25bfb759ae1a7eb7123b81ef0873b925ccc\"" May 13 00:09:38.335426 systemd[1]: Started cri-containerd-c08ae2564572baf6094557853642f25bfb759ae1a7eb7123b81ef0873b925ccc.scope - libcontainer container c08ae2564572baf6094557853642f25bfb759ae1a7eb7123b81ef0873b925ccc. May 13 00:09:38.382863 containerd[1437]: time="2025-05-13T00:09:38.382821739Z" level=info msg="StartContainer for \"c08ae2564572baf6094557853642f25bfb759ae1a7eb7123b81ef0873b925ccc\" returns successfully" May 13 00:09:39.079984 kubelet[2452]: I0513 00:09:39.079879 2452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:09:39.080762 kubelet[2452]: E0513 00:09:39.080740 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:39.191513 containerd[1437]: time="2025-05-13T00:09:39.191450291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:39.192893 containerd[1437]: time="2025-05-13T00:09:39.192853021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 00:09:39.195293 containerd[1437]: time="2025-05-13T00:09:39.194170870Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:39.196854 containerd[1437]: time="2025-05-13T00:09:39.196793688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:39.198308 containerd[1437]: time="2025-05-13T00:09:39.198241419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.615599217s" May 13 
00:09:39.198308 containerd[1437]: time="2025-05-13T00:09:39.198307379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 00:09:39.200814 containerd[1437]: time="2025-05-13T00:09:39.200776197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:09:39.210652 containerd[1437]: time="2025-05-13T00:09:39.210598986Z" level=info msg="CreateContainer within sandbox \"d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 00:09:39.247564 containerd[1437]: time="2025-05-13T00:09:39.247501726Z" level=info msg="CreateContainer within sandbox \"d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2249cfd9e85400f2bd60b6f059cd31e634cac92808eeb9816c5684d5ea84600e\"" May 13 00:09:39.248358 containerd[1437]: time="2025-05-13T00:09:39.248319972Z" level=info msg="StartContainer for \"2249cfd9e85400f2bd60b6f059cd31e634cac92808eeb9816c5684d5ea84600e\"" May 13 00:09:39.284129 systemd[1]: Started cri-containerd-2249cfd9e85400f2bd60b6f059cd31e634cac92808eeb9816c5684d5ea84600e.scope - libcontainer container 2249cfd9e85400f2bd60b6f059cd31e634cac92808eeb9816c5684d5ea84600e. May 13 00:09:39.325067 containerd[1437]: time="2025-05-13T00:09:39.324817432Z" level=info msg="StartContainer for \"2249cfd9e85400f2bd60b6f059cd31e634cac92808eeb9816c5684d5ea84600e\" returns successfully" May 13 00:09:40.102408 kubelet[2452]: E0513 00:09:40.102165 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:40.102807 kubelet[2452]: I0513 00:09:40.102570 2452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:09:40.121283 kubelet[2452]: I0513 00:09:40.120420 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78445c69b9-vz5v9" podStartSLOduration=27.120392986 podStartE2EDuration="27.120392986s" podCreationTimestamp="2025-05-13 00:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:09:39.095482733 +0000 UTC m=+38.284857293" watchObservedRunningTime="2025-05-13 00:09:40.120392986 +0000 UTC m=+39.309767586" May 13 00:09:40.204612 systemd-networkd[1374]: calid53d91d7555: Gained IPv6LL May 13 00:09:40.260923 containerd[1437]: time="2025-05-13T00:09:40.260873751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:40.261403 containerd[1437]: time="2025-05-13T00:09:40.261333275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 00:09:40.262075 containerd[1437]: time="2025-05-13T00:09:40.262046560Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:40.264209 containerd[1437]: time="2025-05-13T00:09:40.264171534Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:40.264896 containerd[1437]: time="2025-05-13T00:09:40.264857659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.063943702s" May 13 00:09:40.264947 containerd[1437]: time="2025-05-13T00:09:40.264894499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 00:09:40.266926 containerd[1437]: time="2025-05-13T00:09:40.266889553Z" level=info msg="CreateContainer within sandbox \"5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:09:40.281721 containerd[1437]: time="2025-05-13T00:09:40.281677454Z" level=info msg="CreateContainer within sandbox \"5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"027fa3526d1a49be84bbbc8eedfd03edb4e9b46c6d70db8a9ddf592dd6f2da91\"" May 13 00:09:40.283587 containerd[1437]: time="2025-05-13T00:09:40.282332939Z" level=info msg="StartContainer for \"027fa3526d1a49be84bbbc8eedfd03edb4e9b46c6d70db8a9ddf592dd6f2da91\"" May 13 00:09:40.309419 systemd[1]: Started cri-containerd-027fa3526d1a49be84bbbc8eedfd03edb4e9b46c6d70db8a9ddf592dd6f2da91.scope - libcontainer container 027fa3526d1a49be84bbbc8eedfd03edb4e9b46c6d70db8a9ddf592dd6f2da91. 
May 13 00:09:40.333825 containerd[1437]: time="2025-05-13T00:09:40.333777853Z" level=info msg="StartContainer for \"027fa3526d1a49be84bbbc8eedfd03edb4e9b46c6d70db8a9ddf592dd6f2da91\" returns successfully" May 13 00:09:40.702772 kubelet[2452]: I0513 00:09:40.702709 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d84469565-2x9zl" podStartSLOduration=23.951236929 podStartE2EDuration="27.702692348s" podCreationTimestamp="2025-05-13 00:09:13 +0000 UTC" firstStartedPulling="2025-05-13 00:09:35.449041696 +0000 UTC m=+34.638416296" lastFinishedPulling="2025-05-13 00:09:39.200497155 +0000 UTC m=+38.389871715" observedRunningTime="2025-05-13 00:09:40.121871076 +0000 UTC m=+39.311245676" watchObservedRunningTime="2025-05-13 00:09:40.702692348 +0000 UTC m=+39.892066948" May 13 00:09:40.892647 kubelet[2452]: I0513 00:09:40.892605 2452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:09:40.967630 kubelet[2452]: I0513 00:09:40.967582 2452 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:09:40.971082 kubelet[2452]: I0513 00:09:40.971050 2452 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:09:41.117773 kubelet[2452]: I0513 00:09:41.117710 2452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bztmm" podStartSLOduration=23.092109992 podStartE2EDuration="28.117692339s" podCreationTimestamp="2025-05-13 00:09:13 +0000 UTC" firstStartedPulling="2025-05-13 00:09:35.240058157 +0000 UTC m=+34.429432757" lastFinishedPulling="2025-05-13 00:09:40.265640504 +0000 UTC m=+39.455015104" observedRunningTime="2025-05-13 00:09:41.116100409 +0000 UTC m=+40.305475009" watchObservedRunningTime="2025-05-13 00:09:41.117692339 +0000 UTC m=+40.307066939" May 13 00:09:42.510737 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:60532.service - OpenSSH per-connection server daemon (10.0.0.1:60532). May 13 00:09:42.562317 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 60532 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:42.563706 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:42.567352 systemd-logind[1419]: New session 10 of user core. May 13 00:09:42.574469 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:09:42.646830 kubelet[2452]: I0513 00:09:42.646782 2452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:09:42.647249 kubelet[2452]: E0513 00:09:42.647213 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:42.819157 sshd[5024]: pam_unix(sshd:session): session closed for user core May 13 00:09:42.833016 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:60532.service: Deactivated successfully. May 13 00:09:42.834622 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:09:42.836044 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. May 13 00:09:42.840551 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:60548.service - OpenSSH per-connection server daemon (10.0.0.1:60548). 
May 13 00:09:42.841524 systemd-logind[1419]: Removed session 10. May 13 00:09:42.870699 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 60548 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:42.871953 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:42.875674 systemd-logind[1419]: New session 11 of user core. May 13 00:09:42.885424 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:09:43.079890 sshd[5087]: pam_unix(sshd:session): session closed for user core May 13 00:09:43.088874 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:60548.service: Deactivated successfully. May 13 00:09:43.091496 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:09:43.094222 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. May 13 00:09:43.105106 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:60552.service - OpenSSH per-connection server daemon (10.0.0.1:60552). May 13 00:09:43.109228 systemd-logind[1419]: Removed session 11. May 13 00:09:43.112886 kubelet[2452]: E0513 00:09:43.112400 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:43.138026 sshd[5100]: Accepted publickey for core from 10.0.0.1 port 60552 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:43.139393 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:43.143605 systemd-logind[1419]: New session 12 of user core. May 13 00:09:43.149404 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:09:43.293145 sshd[5100]: pam_unix(sshd:session): session closed for user core May 13 00:09:43.296551 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:60552.service: Deactivated successfully. May 13 00:09:43.299546 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:09:43.300530 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. May 13 00:09:43.301411 systemd-logind[1419]: Removed session 12. May 13 00:09:48.304558 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:60562.service - OpenSSH per-connection server daemon (10.0.0.1:60562). May 13 00:09:48.344781 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 60562 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:48.346326 sshd[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:48.350446 systemd-logind[1419]: New session 13 of user core. May 13 00:09:48.361453 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:09:48.498380 sshd[5129]: pam_unix(sshd:session): session closed for user core May 13 00:09:48.509987 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:60562.service: Deactivated successfully. May 13 00:09:48.511635 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:09:48.513001 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. May 13 00:09:48.514349 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:60572.service - OpenSSH per-connection server daemon (10.0.0.1:60572). May 13 00:09:48.515201 systemd-logind[1419]: Removed session 13. 
May 13 00:09:48.561443 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 60572 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:48.562935 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:48.566851 systemd-logind[1419]: New session 14 of user core. May 13 00:09:48.578460 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 00:09:48.829904 sshd[5143]: pam_unix(sshd:session): session closed for user core May 13 00:09:48.841885 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:60572.service: Deactivated successfully. May 13 00:09:48.843548 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:09:48.845113 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit. May 13 00:09:48.847533 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:60578.service - OpenSSH per-connection server daemon (10.0.0.1:60578). May 13 00:09:48.848857 systemd-logind[1419]: Removed session 14. May 13 00:09:48.891415 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 60578 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:48.892701 sshd[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:48.896736 systemd-logind[1419]: New session 15 of user core. May 13 00:09:48.908467 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 00:09:50.422076 sshd[5155]: pam_unix(sshd:session): session closed for user core May 13 00:09:50.435689 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:60594.service - OpenSSH per-connection server daemon (10.0.0.1:60594). May 13 00:09:50.436478 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:60578.service: Deactivated successfully. May 13 00:09:50.440009 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:09:50.444901 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. May 13 00:09:50.450998 systemd-logind[1419]: Removed session 15. May 13 00:09:50.484422 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 60594 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:50.485895 sshd[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:50.490123 systemd-logind[1419]: New session 16 of user core. May 13 00:09:50.497535 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 00:09:50.826392 sshd[5174]: pam_unix(sshd:session): session closed for user core May 13 00:09:50.838339 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:60594.service: Deactivated successfully. May 13 00:09:50.841063 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:09:50.842875 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. May 13 00:09:50.848690 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:60608.service - OpenSSH per-connection server daemon (10.0.0.1:60608). May 13 00:09:50.851170 systemd-logind[1419]: Removed session 16. May 13 00:09:50.880084 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 60608 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:50.881888 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:50.887309 systemd-logind[1419]: New session 17 of user core. May 13 00:09:50.896476 systemd[1]: Started session-17.scope - Session 17 of User core. 
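The StopPodSandbox / RemovePodSandbox teardowns further below all repeat one Calico IPAM pattern: under the host-wide lock, release the address by its handle ID first, and when the handle is already gone ("Asked to release address but it doesn't exist. Ignoring") fall back to the workload ID, so repeated teardown stays idempotent. A toy sketch of that fallback; the names here are illustrative stand-ins, not Calico's API:

```go
// Toy sketch of the release-by-handle-then-workload fallback visible in
// the teardown records below. ErrHandleNotFound, releaseByHandle and
// releaseByWorkload are illustrative, not Calico's real functions.
package main

import (
	"errors"
	"fmt"
)

var ErrHandleNotFound = errors.New("handle does not exist")

// store maps handle IDs to allocated addresses.
var store = map[string]string{}

func releaseByHandle(handle string) error {
	if _, ok := store[handle]; !ok {
		return ErrHandleNotFound
	}
	delete(store, handle)
	return nil
}

func releaseByWorkload(workload string) {
	// Best-effort cleanup keyed by workload ID instead of handle ID.
}

// releaseIP mirrors the logged order: handle ID first, workload ID as a
// fallback; a missing allocation is ignored so teardown is idempotent.
func releaseIP(handle, workload string) {
	if err := releaseByHandle(handle); errors.Is(err, ErrHandleNotFound) {
		fmt.Println("Asked to release address but it doesn't exist. Ignoring")
		releaseByWorkload(workload) // "Releasing address using workloadID"
	}
}

func main() {
	releaseIP("k8s-pod-network.c69caa242909", "localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0")
}
```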
May 13 00:09:51.062535 sshd[5189]: pam_unix(sshd:session): session closed for user core May 13 00:09:51.069785 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:60608.service: Deactivated successfully. May 13 00:09:51.074798 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:09:51.076002 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. May 13 00:09:51.077960 systemd-logind[1419]: Removed session 17. May 13 00:09:56.078714 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:33104.service - OpenSSH per-connection server daemon (10.0.0.1:33104). May 13 00:09:56.111446 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 33104 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:56.112878 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:56.116966 systemd-logind[1419]: New session 18 of user core. May 13 00:09:56.130477 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:09:56.262000 sshd[5242]: pam_unix(sshd:session): session closed for user core May 13 00:09:56.265402 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:33104.service: Deactivated successfully. May 13 00:09:56.267395 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:09:56.268020 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit. May 13 00:09:56.268943 systemd-logind[1419]: Removed session 18. May 13 00:10:00.888447 containerd[1437]: time="2025-05-13T00:10:00.888354368Z" level=info msg="StopPodSandbox for \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\"" May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.923 [WARNING][5276] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a77c4869-daeb-485e-a903-8ffb6cd275cb", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b", Pod:"coredns-6f6b679f8f-ztnr2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3413267b0eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.923 [INFO][5276] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.923 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" iface="eth0" netns="" May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.923 [INFO][5276] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.923 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.948 [INFO][5286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.948 [INFO][5286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.948 [INFO][5286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.957 [WARNING][5286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.957 [INFO][5286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.958 [INFO][5286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:00.961952 containerd[1437]: 2025-05-13 00:10:00.960 [INFO][5276] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:10:00.962794 containerd[1437]: time="2025-05-13T00:10:00.961991444Z" level=info msg="TearDown network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\" successfully" May 13 00:10:00.962794 containerd[1437]: time="2025-05-13T00:10:00.962017885Z" level=info msg="StopPodSandbox for \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\" returns successfully" May 13 00:10:00.963720 containerd[1437]: time="2025-05-13T00:10:00.963060850Z" level=info msg="RemovePodSandbox for \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\"" May 13 00:10:00.967628 containerd[1437]: time="2025-05-13T00:10:00.967333350Z" level=info msg="Forcibly stopping sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\"" May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.004 [WARNING][5310] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a77c4869-daeb-485e-a903-8ffb6cd275cb", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a0abb88800da24410e0a619a2696c8f5bc47d6523762a07112877a19afa2e1b", Pod:"coredns-6f6b679f8f-ztnr2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3413267b0eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.004 [INFO][5310] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.004 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" iface="eth0" netns="" May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.004 [INFO][5310] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.004 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.023 [INFO][5318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.023 [INFO][5318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.023 [INFO][5318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.031 [WARNING][5318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.031 [INFO][5318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" HandleID="k8s-pod-network.c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" Workload="localhost-k8s-coredns--6f6b679f8f--ztnr2-eth0" May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.033 [INFO][5318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.036186 containerd[1437]: 2025-05-13 00:10:01.034 [INFO][5310] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e" May 13 00:10:01.036610 containerd[1437]: time="2025-05-13T00:10:01.036205282Z" level=info msg="TearDown network for sandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\" successfully" May 13 00:10:01.058180 containerd[1437]: time="2025-05-13T00:10:01.058128387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:10:01.058320 containerd[1437]: time="2025-05-13T00:10:01.058209507Z" level=info msg="RemovePodSandbox \"c69caa242909d2850bc8a227020e4d5c9a7fefe22f2b8af6f88d4b779938c57e\" returns successfully" May 13 00:10:01.059055 containerd[1437]: time="2025-05-13T00:10:01.058808270Z" level=info msg="StopPodSandbox for \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\"" May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.092 [WARNING][5341] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bztmm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f17de286-e9a7-4976-97e8-a35d9794c721", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367", Pod:"csi-node-driver-bztmm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66ff05710cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.092 [INFO][5341] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.092 [INFO][5341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" iface="eth0" netns="" May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.092 [INFO][5341] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.092 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.120 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.120 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.120 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.128 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.128 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.130 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.133299 containerd[1437]: 2025-05-13 00:10:01.131 [INFO][5341] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:10:01.133897 containerd[1437]: time="2025-05-13T00:10:01.133749029Z" level=info msg="TearDown network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\" successfully" May 13 00:10:01.133897 containerd[1437]: time="2025-05-13T00:10:01.133780790Z" level=info msg="StopPodSandbox for \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\" returns successfully" May 13 00:10:01.134332 containerd[1437]: time="2025-05-13T00:10:01.134305952Z" level=info msg="RemovePodSandbox for \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\"" May 13 00:10:01.134426 containerd[1437]: time="2025-05-13T00:10:01.134341352Z" level=info msg="Forcibly stopping sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\"" May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.167 [WARNING][5374] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bztmm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f17de286-e9a7-4976-97e8-a35d9794c721", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5604677f3318435141028a4d59e601d9b189ad2734c64f287cf2535078821367", Pod:"csi-node-driver-bztmm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66ff05710cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.167 [INFO][5374] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.167 [INFO][5374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" iface="eth0" netns="" May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.167 [INFO][5374] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.167 [INFO][5374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.186 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.186 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.186 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.196 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.196 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" HandleID="k8s-pod-network.57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" Workload="localhost-k8s-csi--node--driver--bztmm-eth0" May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.197 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.200272 containerd[1437]: 2025-05-13 00:10:01.198 [INFO][5374] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67" May 13 00:10:01.200272 containerd[1437]: time="2025-05-13T00:10:01.200235868Z" level=info msg="TearDown network for sandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\" successfully" May 13 00:10:01.203581 containerd[1437]: time="2025-05-13T00:10:01.203539604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:10:01.203657 containerd[1437]: time="2025-05-13T00:10:01.203601844Z" level=info msg="RemovePodSandbox \"57adbb0a67d29b0bf4ad1796865ac693e2cd7f97425c4bd1488202a63caa1a67\" returns successfully" May 13 00:10:01.204254 containerd[1437]: time="2025-05-13T00:10:01.204226247Z" level=info msg="StopPodSandbox for \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\"" May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.236 [WARNING][5406] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0", GenerateName:"calico-kube-controllers-7d84469565-", Namespace:"calico-system", SelfLink:"", UID:"9eb635b5-09fc-4daa-ae36-54b042b695f8", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d84469565", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415", Pod:"calico-kube-controllers-7d84469565-2x9zl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib085c1c372e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.237 [INFO][5406] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.237 [INFO][5406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" iface="eth0" netns="" May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.237 [INFO][5406] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.237 [INFO][5406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.254 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.254 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.254 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.262 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.262 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.264 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.266578 containerd[1437]: 2025-05-13 00:10:01.265 [INFO][5406] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:10:01.266999 containerd[1437]: time="2025-05-13T00:10:01.266615186Z" level=info msg="TearDown network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\" successfully" May 13 00:10:01.266999 containerd[1437]: time="2025-05-13T00:10:01.266639786Z" level=info msg="StopPodSandbox for \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\" returns successfully" May 13 00:10:01.267210 containerd[1437]: time="2025-05-13T00:10:01.267189349Z" level=info msg="RemovePodSandbox for \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\"" May 13 00:10:01.267240 containerd[1437]: time="2025-05-13T00:10:01.267222589Z" level=info msg="Forcibly stopping sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\"" May 13 00:10:01.280520 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:33112.service - OpenSSH per-connection server daemon (10.0.0.1:33112). May 13 00:10:01.324037 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 33112 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:10:01.325796 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:10:01.332464 systemd-logind[1419]: New session 19 of user core. May 13 00:10:01.337453 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.303 [WARNING][5437] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0", GenerateName:"calico-kube-controllers-7d84469565-", Namespace:"calico-system", SelfLink:"", UID:"9eb635b5-09fc-4daa-ae36-54b042b695f8", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d84469565", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d74e32b54893723dcd4feacbcff5d47af00ff6d7ef5c381d2a6ef61d31863415", Pod:"calico-kube-controllers-7d84469565-2x9zl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib085c1c372e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.303 [INFO][5437] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.304 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" iface="eth0" netns="" May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.304 [INFO][5437] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.304 [INFO][5437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.323 [INFO][5447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.324 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.324 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.334 [WARNING][5447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.334 [INFO][5447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" HandleID="k8s-pod-network.79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" Workload="localhost-k8s-calico--kube--controllers--7d84469565--2x9zl-eth0" May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.335 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.339466 containerd[1437]: 2025-05-13 00:10:01.337 [INFO][5437] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3" May 13 00:10:01.340514 containerd[1437]: time="2025-05-13T00:10:01.339654456Z" level=info msg="TearDown network for sandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\" successfully" May 13 00:10:01.347162 containerd[1437]: time="2025-05-13T00:10:01.347115292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:10:01.347276 containerd[1437]: time="2025-05-13T00:10:01.347187052Z" level=info msg="RemovePodSandbox \"79bdb8a57c6bef7a8f2d085c160d7ec740cc530f2f3fc2579157d5a6ca0850d3\" returns successfully" May 13 00:10:01.347665 containerd[1437]: time="2025-05-13T00:10:01.347641575Z" level=info msg="StopPodSandbox for \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\"" May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.380 [WARNING][5471] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dsktj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"efa396e0-d0f8-4e2e-92a4-a9b755e3f9de", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789", Pod:"coredns-6f6b679f8f-dsktj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic760c87c48c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.380 [INFO][5471] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.380 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" iface="eth0" netns="" May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.380 [INFO][5471] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.380 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.402 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.402 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.402 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.411 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.411 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.412 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.415739 containerd[1437]: 2025-05-13 00:10:01.414 [INFO][5471] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:10:01.416134 containerd[1437]: time="2025-05-13T00:10:01.415780661Z" level=info msg="TearDown network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\" successfully" May 13 00:10:01.416134 containerd[1437]: time="2025-05-13T00:10:01.415806741Z" level=info msg="StopPodSandbox for \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\" returns successfully" May 13 00:10:01.416356 containerd[1437]: time="2025-05-13T00:10:01.416331384Z" level=info msg="RemovePodSandbox for \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\"" May 13 00:10:01.416404 containerd[1437]: time="2025-05-13T00:10:01.416367104Z" level=info msg="Forcibly stopping sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\"" May 13 00:10:01.464893 sshd[5442]: pam_unix(sshd:session): session closed for user core May 13 00:10:01.469209 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:33112.service: Deactivated successfully. May 13 00:10:01.473897 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:10:01.474859 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit. May 13 00:10:01.476416 systemd-logind[1419]: Removed session 19. May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.455 [WARNING][5510] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--dsktj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"efa396e0-d0f8-4e2e-92a4-a9b755e3f9de", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14e7adcb2ec16def4d8da3efa377df8f7f4e4fd206fd16c4d2fc6e42259f3789", Pod:"coredns-6f6b679f8f-dsktj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic760c87c48c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.456 [INFO][5510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.456 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" iface="eth0" netns="" May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.456 [INFO][5510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.456 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.475 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.475 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.475 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.488 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.488 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" HandleID="k8s-pod-network.b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" Workload="localhost-k8s-coredns--6f6b679f8f--dsktj-eth0" May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.490 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.493483 containerd[1437]: 2025-05-13 00:10:01.492 [INFO][5510] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8" May 13 00:10:01.493856 containerd[1437]: time="2025-05-13T00:10:01.493525834Z" level=info msg="TearDown network for sandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\" successfully" May 13 00:10:01.498043 containerd[1437]: time="2025-05-13T00:10:01.497976815Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:10:01.498121 containerd[1437]: time="2025-05-13T00:10:01.498094856Z" level=info msg="RemovePodSandbox \"b40f59446a88db83e41c19037c6d6c91c3c1951d19165cc54e1343407c9994b8\" returns successfully" May 13 00:10:01.498554 containerd[1437]: time="2025-05-13T00:10:01.498528538Z" level=info msg="StopPodSandbox for \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\"" May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.531 [WARNING][5543] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0", GenerateName:"calico-apiserver-78445c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf96a25-9422-4227-9da0-78c6d9df4e1e", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78445c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8", Pod:"calico-apiserver-78445c69b9-wbfwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic0cf0c8d8df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.531 [INFO][5543] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.531 [INFO][5543] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" iface="eth0" netns="" May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.531 [INFO][5543] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.531 [INFO][5543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.549 [INFO][5552] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.549 [INFO][5552] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.549 [INFO][5552] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.558 [WARNING][5552] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.558 [INFO][5552] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.559 [INFO][5552] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.562553 containerd[1437]: 2025-05-13 00:10:01.561 [INFO][5543] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:10:01.562941 containerd[1437]: time="2025-05-13T00:10:01.562591685Z" level=info msg="TearDown network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\" successfully" May 13 00:10:01.562941 containerd[1437]: time="2025-05-13T00:10:01.562619565Z" level=info msg="StopPodSandbox for \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\" returns successfully" May 13 00:10:01.563221 containerd[1437]: time="2025-05-13T00:10:01.563169088Z" level=info msg="RemovePodSandbox for \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\"" May 13 00:10:01.563278 containerd[1437]: time="2025-05-13T00:10:01.563225448Z" level=info msg="Forcibly stopping sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\"" May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.596 [WARNING][5575] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0", GenerateName:"calico-apiserver-78445c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"abf96a25-9422-4227-9da0-78c6d9df4e1e", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78445c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d67a38149d30d385f86a8f1a8138837fae957162ebd103a74a2d08b870c759f8", Pod:"calico-apiserver-78445c69b9-wbfwl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic0cf0c8d8df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.596 [INFO][5575] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.596 [INFO][5575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" iface="eth0" netns="" May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.596 [INFO][5575] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.596 [INFO][5575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.622 [INFO][5584] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.622 [INFO][5584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.622 [INFO][5584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.631 [WARNING][5584] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.631 [INFO][5584] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" HandleID="k8s-pod-network.4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" Workload="localhost-k8s-calico--apiserver--78445c69b9--wbfwl-eth0" May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.632 [INFO][5584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.635617 containerd[1437]: 2025-05-13 00:10:01.634 [INFO][5575] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d" May 13 00:10:01.636037 containerd[1437]: time="2025-05-13T00:10:01.635652155Z" level=info msg="TearDown network for sandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\" successfully" May 13 00:10:01.639084 containerd[1437]: time="2025-05-13T00:10:01.638315768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:10:01.639084 containerd[1437]: time="2025-05-13T00:10:01.638376928Z" level=info msg="RemovePodSandbox \"4c4c90edb0ca19b850c99e21650b813f41c7c951092eac6de249c3c6cbbe200d\" returns successfully" May 13 00:10:01.639084 containerd[1437]: time="2025-05-13T00:10:01.638990771Z" level=info msg="StopPodSandbox for \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\"" May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.671 [WARNING][5606] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0", GenerateName:"calico-apiserver-78445c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a6f6bd6-2495-464c-927f-4c4a69a0b8bc", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78445c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9", Pod:"calico-apiserver-78445c69b9-vz5v9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid53d91d7555", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.672 [INFO][5606] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.672 [INFO][5606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" iface="eth0" netns="" May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.672 [INFO][5606] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.672 [INFO][5606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.690 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.690 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.690 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.699 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.699 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.702 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.705423 containerd[1437]: 2025-05-13 00:10:01.703 [INFO][5606] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:10:01.705811 containerd[1437]: time="2025-05-13T00:10:01.705470530Z" level=info msg="TearDown network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\" successfully" May 13 00:10:01.712920 containerd[1437]: time="2025-05-13T00:10:01.705495170Z" level=info msg="StopPodSandbox for \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\" returns successfully" May 13 00:10:01.713504 containerd[1437]: time="2025-05-13T00:10:01.713477208Z" level=info msg="RemovePodSandbox for \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\"" May 13 00:10:01.713571 containerd[1437]: time="2025-05-13T00:10:01.713512568Z" level=info msg="Forcibly stopping sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\"" May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.746 [WARNING][5638] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0", GenerateName:"calico-apiserver-78445c69b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a6f6bd6-2495-464c-927f-4c4a69a0b8bc", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78445c69b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d8f6bda1c259dab3ec6dff4292e4e3a8aa7747ecccf2b8a75a76a504d1b72a9", Pod:"calico-apiserver-78445c69b9-vz5v9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid53d91d7555", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.746 [INFO][5638] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.746 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" iface="eth0" netns="" May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.746 [INFO][5638] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.746 [INFO][5638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.764 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.764 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.764 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.772 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.772 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" HandleID="k8s-pod-network.e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" Workload="localhost-k8s-calico--apiserver--78445c69b9--vz5v9-eth0" May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.774 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:10:01.777012 containerd[1437]: 2025-05-13 00:10:01.775 [INFO][5638] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11" May 13 00:10:01.777012 containerd[1437]: time="2025-05-13T00:10:01.776993992Z" level=info msg="TearDown network for sandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\" successfully" May 13 00:10:01.780044 containerd[1437]: time="2025-05-13T00:10:01.780006607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:10:01.780126 containerd[1437]: time="2025-05-13T00:10:01.780073527Z" level=info msg="RemovePodSandbox \"e8678be89270303df5c1f3c49079f4fd9cc7c27e6bc86107eaa1c2731c904c11\" returns successfully" May 13 00:10:06.476475 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:52938.service - OpenSSH per-connection server daemon (10.0.0.1:52938). May 13 00:10:06.510191 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 52938 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:10:06.511690 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:10:06.516186 systemd-logind[1419]: New session 20 of user core. May 13 00:10:06.526459 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:10:06.678796 sshd[5655]: pam_unix(sshd:session): session closed for user core May 13 00:10:06.682083 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:52938.service: Deactivated successfully. May 13 00:10:06.684611 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:10:06.685427 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit. May 13 00:10:06.686272 systemd-logind[1419]: Removed session 20.