May 13 00:21:39.915886 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:21:39.915907 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon May 12 22:51:32 -00 2025
May 13 00:21:39.915917 kernel: KASLR enabled
May 13 00:21:39.915922 kernel: efi: EFI v2.7 by EDK II
May 13 00:21:39.915928 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 13 00:21:39.915934 kernel: random: crng init done
May 13 00:21:39.915941 kernel: ACPI: Early table checksum verification disabled
May 13 00:21:39.915947 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 13 00:21:39.915953 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:21:39.915961 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.915967 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.915973 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.915979 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.915985 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.915993 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.916001 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.916007 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.916014 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:21:39.916020 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:21:39.916027 kernel: NUMA: Failed to initialise from firmware
May 13 00:21:39.916033 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:21:39.916039 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 13 00:21:39.916045 kernel: Zone ranges:
May 13 00:21:39.916064 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:21:39.916070 kernel: DMA32 empty
May 13 00:21:39.916077 kernel: Normal empty
May 13 00:21:39.916084 kernel: Movable zone start for each node
May 13 00:21:39.916091 kernel: Early memory node ranges
May 13 00:21:39.916097 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 13 00:21:39.916103 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 00:21:39.916110 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 00:21:39.916116 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 00:21:39.916122 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 00:21:39.916129 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 00:21:39.916135 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 00:21:39.916142 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:21:39.916148 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:21:39.916156 kernel: psci: probing for conduit method from ACPI.
May 13 00:21:39.916162 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:21:39.916169 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:21:39.916178 kernel: psci: Trusted OS migration not required
May 13 00:21:39.916185 kernel: psci: SMC Calling Convention v1.1
May 13 00:21:39.916192 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:21:39.916200 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 00:21:39.916206 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 00:21:39.916213 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:21:39.916220 kernel: Detected PIPT I-cache on CPU0
May 13 00:21:39.916227 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:21:39.916234 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:21:39.916241 kernel: CPU features: detected: Spectre-v4
May 13 00:21:39.916248 kernel: CPU features: detected: Spectre-BHB
May 13 00:21:39.916254 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:21:39.916261 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:21:39.916269 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:21:39.916276 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:21:39.916283 kernel: alternatives: applying boot alternatives
May 13 00:21:39.916291 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:21:39.916298 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:21:39.916305 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:21:39.916312 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:21:39.916319 kernel: Fallback order for Node 0: 0
May 13 00:21:39.916326 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:21:39.916333 kernel: Policy zone: DMA
May 13 00:21:39.916340 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:21:39.916349 kernel: software IO TLB: area num 4.
May 13 00:21:39.916356 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 00:21:39.916364 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
May 13 00:21:39.916371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:21:39.916379 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:21:39.916386 kernel: rcu: RCU event tracing is enabled.
May 13 00:21:39.916394 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:21:39.916401 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:21:39.916409 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:21:39.916416 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:21:39.916423 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:21:39.916430 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:21:39.916439 kernel: GICv3: 256 SPIs implemented
May 13 00:21:39.916446 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:21:39.916453 kernel: Root IRQ handler: gic_handle_irq
May 13 00:21:39.916460 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 00:21:39.916467 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:21:39.916474 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:21:39.916482 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:21:39.916489 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:21:39.916496 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 00:21:39.916504 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 00:21:39.916511 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:21:39.916519 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:21:39.916525 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:21:39.916533 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:21:39.916540 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:21:39.916546 kernel: arm-pv: using stolen time PV
May 13 00:21:39.916554 kernel: Console: colour dummy device 80x25
May 13 00:21:39.916561 kernel: ACPI: Core revision 20230628
May 13 00:21:39.916569 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:21:39.916576 kernel: pid_max: default: 32768 minimum: 301
May 13 00:21:39.916583 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:21:39.916591 kernel: landlock: Up and running.
May 13 00:21:39.916598 kernel: SELinux: Initializing.
May 13 00:21:39.916605 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:21:39.916612 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:21:39.916620 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:21:39.916627 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:21:39.916634 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:21:39.916642 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:21:39.916649 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:21:39.916658 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:21:39.916679 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:21:39.916686 kernel: Remapping and enabling EFI services.
May 13 00:21:39.916694 kernel: smp: Bringing up secondary CPUs ...
May 13 00:21:39.916701 kernel: Detected PIPT I-cache on CPU1
May 13 00:21:39.916707 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:21:39.916715 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 00:21:39.916722 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:21:39.916729 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:21:39.916737 kernel: Detected PIPT I-cache on CPU2
May 13 00:21:39.916744 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:21:39.916751 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 00:21:39.916763 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:21:39.916771 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:21:39.916778 kernel: Detected PIPT I-cache on CPU3
May 13 00:21:39.916785 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:21:39.916792 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 00:21:39.916800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:21:39.916812 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:21:39.916820 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:21:39.916829 kernel: SMP: Total of 4 processors activated.
May 13 00:21:39.916836 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:21:39.916844 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:21:39.916851 kernel: CPU features: detected: Common not Private translations
May 13 00:21:39.916858 kernel: CPU features: detected: CRC32 instructions
May 13 00:21:39.916865 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 00:21:39.916874 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:21:39.916881 kernel: CPU features: detected: LSE atomic instructions
May 13 00:21:39.916888 kernel: CPU features: detected: Privileged Access Never
May 13 00:21:39.916895 kernel: CPU features: detected: RAS Extension Support
May 13 00:21:39.916902 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:21:39.916910 kernel: CPU: All CPU(s) started at EL1
May 13 00:21:39.916917 kernel: alternatives: applying system-wide alternatives
May 13 00:21:39.916924 kernel: devtmpfs: initialized
May 13 00:21:39.916931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:21:39.916940 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:21:39.916947 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:21:39.916954 kernel: SMBIOS 3.0.0 present.
May 13 00:21:39.916962 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 13 00:21:39.916969 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:21:39.916976 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:21:39.916983 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:21:39.916991 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:21:39.916998 kernel: audit: initializing netlink subsys (disabled)
May 13 00:21:39.917007 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 13 00:21:39.917014 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:21:39.917021 kernel: cpuidle: using governor menu
May 13 00:21:39.917028 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:21:39.917035 kernel: ASID allocator initialised with 32768 entries
May 13 00:21:39.917048 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:21:39.917056 kernel: Serial: AMBA PL011 UART driver
May 13 00:21:39.917063 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 00:21:39.917070 kernel: Modules: 0 pages in range for non-PLT usage
May 13 00:21:39.917078 kernel: Modules: 509008 pages in range for PLT usage
May 13 00:21:39.917086 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:21:39.917093 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:21:39.917100 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:21:39.917107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 00:21:39.917115 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:21:39.917122 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:21:39.917129 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:21:39.917136 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 00:21:39.917143 kernel: ACPI: Added _OSI(Module Device)
May 13 00:21:39.917152 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:21:39.917160 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:21:39.917167 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:21:39.917174 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:21:39.917181 kernel: ACPI: Interpreter enabled
May 13 00:21:39.917189 kernel: ACPI: Using GIC for interrupt routing
May 13 00:21:39.917196 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:21:39.917203 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:21:39.917210 kernel: printk: console [ttyAMA0] enabled
May 13 00:21:39.917218 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:21:39.917350 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:21:39.917426 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:21:39.917492 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:21:39.917556 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:21:39.917634 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:21:39.917643 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:21:39.917653 kernel: PCI host bridge to bus 0000:00
May 13 00:21:39.917739 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:21:39.917801 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:21:39.917872 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:21:39.917933 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:21:39.918012 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:21:39.918093 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:21:39.918160 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:21:39.918227 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:21:39.918293 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:21:39.918359 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:21:39.918425 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:21:39.918491 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:21:39.918552 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:21:39.918610 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:21:39.918695 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:21:39.918707 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:21:39.918714 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:21:39.918722 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:21:39.918729 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:21:39.918736 kernel: iommu: Default domain type: Translated
May 13 00:21:39.918747 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:21:39.918754 kernel: efivars: Registered efivars operations
May 13 00:21:39.918761 kernel: vgaarb: loaded
May 13 00:21:39.918768 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:21:39.918776 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:21:39.918783 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:21:39.918790 kernel: pnp: PnP ACPI init
May 13 00:21:39.918882 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:21:39.918894 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:21:39.918904 kernel: NET: Registered PF_INET protocol family
May 13 00:21:39.918912 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:21:39.918919 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:21:39.918926 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:21:39.918934 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:21:39.918941 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:21:39.918948 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:21:39.918955 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:21:39.918964 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:21:39.918971 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:21:39.918979 kernel: PCI: CLS 0 bytes, default 64
May 13 00:21:39.918986 kernel: kvm [1]: HYP mode not available
May 13 00:21:39.918993 kernel: Initialise system trusted keyrings
May 13 00:21:39.919000 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:21:39.919007 kernel: Key type asymmetric registered
May 13 00:21:39.919014 kernel: Asymmetric key parser 'x509' registered
May 13 00:21:39.919021 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 00:21:39.919029 kernel: io scheduler mq-deadline registered
May 13 00:21:39.919037 kernel: io scheduler kyber registered
May 13 00:21:39.919044 kernel: io scheduler bfq registered
May 13 00:21:39.919052 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:21:39.919059 kernel: ACPI: button: Power Button [PWRB]
May 13 00:21:39.919066 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:21:39.919140 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:21:39.919151 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:21:39.919158 kernel: thunder_xcv, ver 1.0
May 13 00:21:39.919165 kernel: thunder_bgx, ver 1.0
May 13 00:21:39.919174 kernel: nicpf, ver 1.0
May 13 00:21:39.919181 kernel: nicvf, ver 1.0
May 13 00:21:39.919257 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:21:39.919321 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:21:39 UTC (1747095699)
May 13 00:21:39.919331 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:21:39.919338 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:21:39.919346 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 00:21:39.919353 kernel: watchdog: Hard watchdog permanently disabled
May 13 00:21:39.919362 kernel: NET: Registered PF_INET6 protocol family
May 13 00:21:39.919369 kernel: Segment Routing with IPv6
May 13 00:21:39.919376 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:21:39.919384 kernel: NET: Registered PF_PACKET protocol family
May 13 00:21:39.919391 kernel: Key type dns_resolver registered
May 13 00:21:39.919398 kernel: registered taskstats version 1
May 13 00:21:39.919406 kernel: Loading compiled-in X.509 certificates
May 13 00:21:39.919413 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce22d51a4ec909274ada9cb7da7d7cb78db539c6'
May 13 00:21:39.919420 kernel: Key type .fscrypt registered
May 13 00:21:39.919429 kernel: Key type fscrypt-provisioning registered
May 13 00:21:39.919437 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:21:39.919444 kernel: ima: Allocated hash algorithm: sha1
May 13 00:21:39.919452 kernel: ima: No architecture policies found
May 13 00:21:39.919459 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:21:39.919466 kernel: clk: Disabling unused clocks
May 13 00:21:39.919473 kernel: Freeing unused kernel memory: 39424K
May 13 00:21:39.919481 kernel: Run /init as init process
May 13 00:21:39.919488 kernel: with arguments:
May 13 00:21:39.919496 kernel: /init
May 13 00:21:39.919503 kernel: with environment:
May 13 00:21:39.919510 kernel: HOME=/
May 13 00:21:39.919517 kernel: TERM=linux
May 13 00:21:39.919524 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:21:39.919533 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:21:39.919543 systemd[1]: Detected virtualization kvm.
May 13 00:21:39.919552 systemd[1]: Detected architecture arm64.
May 13 00:21:39.919559 systemd[1]: Running in initrd.
May 13 00:21:39.919567 systemd[1]: No hostname configured, using default hostname.
May 13 00:21:39.919575 systemd[1]: Hostname set to .
May 13 00:21:39.919583 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:21:39.919591 systemd[1]: Queued start job for default target initrd.target.
May 13 00:21:39.919598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:21:39.919606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:21:39.919616 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:21:39.919624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:21:39.919632 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:21:39.919641 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:21:39.919650 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:21:39.919658 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:21:39.919746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:21:39.919759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:21:39.919767 systemd[1]: Reached target paths.target - Path Units.
May 13 00:21:39.919775 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:21:39.919783 systemd[1]: Reached target swap.target - Swaps.
May 13 00:21:39.919791 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:21:39.919798 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:21:39.919813 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:21:39.919821 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:21:39.919829 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:21:39.919839 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:21:39.919847 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:21:39.919855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:21:39.919863 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:21:39.919871 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:21:39.919879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:21:39.919887 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:21:39.919895 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:21:39.919904 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:21:39.919913 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:21:39.919921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:21:39.919929 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:21:39.919937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:21:39.919945 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:21:39.919955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:21:39.919963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:21:39.919992 systemd-journald[237]: Collecting audit messages is disabled.
May 13 00:21:39.920014 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:21:39.920022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:21:39.920031 systemd-journald[237]: Journal started
May 13 00:21:39.920050 systemd-journald[237]: Runtime Journal (/run/log/journal/39fa052bf65c42a98623465af7ee0640) is 5.9M, max 47.3M, 41.4M free.
May 13 00:21:39.912747 systemd-modules-load[238]: Inserted module 'overlay'
May 13 00:21:39.927209 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:21:39.927244 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:21:39.927255 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:21:39.927265 kernel: Bridge firewalling registered
May 13 00:21:39.927555 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 13 00:21:39.929902 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:21:39.933110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:21:39.934429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:21:39.938441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:21:39.942900 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:21:39.943994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:21:39.945595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:21:39.956892 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:21:39.959031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:21:39.966864 dracut-cmdline[274]: dracut-dracut-053
May 13 00:21:39.969272 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:21:39.984322 systemd-resolved[277]: Positive Trust Anchors:
May 13 00:21:39.984338 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:21:39.984371 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:21:39.989023 systemd-resolved[277]: Defaulting to hostname 'linux'.
May 13 00:21:39.990091 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:21:39.991284 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:21:40.037697 kernel: SCSI subsystem initialized
May 13 00:21:40.042688 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:21:40.050706 kernel: iscsi: registered transport (tcp)
May 13 00:21:40.062873 kernel: iscsi: registered transport (qla4xxx)
May 13 00:21:40.062903 kernel: QLogic iSCSI HBA Driver
May 13 00:21:40.106222 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:21:40.114823 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:21:40.132166 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:21:40.132210 kernel: device-mapper: uevent: version 1.0.3
May 13 00:21:40.132227 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:21:40.178687 kernel: raid6: neonx8 gen() 15799 MB/s
May 13 00:21:40.195687 kernel: raid6: neonx4 gen() 15666 MB/s
May 13 00:21:40.212686 kernel: raid6: neonx2 gen() 13230 MB/s
May 13 00:21:40.229676 kernel: raid6: neonx1 gen() 10491 MB/s
May 13 00:21:40.246686 kernel: raid6: int64x8 gen() 6971 MB/s
May 13 00:21:40.263685 kernel: raid6: int64x4 gen() 7341 MB/s
May 13 00:21:40.280676 kernel: raid6: int64x2 gen() 6136 MB/s
May 13 00:21:40.297677 kernel: raid6: int64x1 gen() 5059 MB/s
May 13 00:21:40.297698 kernel: raid6: using algorithm neonx8 gen() 15799 MB/s
May 13 00:21:40.314686 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
May 13 00:21:40.314713 kernel: raid6: using neon recovery algorithm
May 13 00:21:40.319677 kernel: xor: measuring software checksum speed
May 13 00:21:40.319690 kernel: 8regs : 19826 MB/sec
May 13 00:21:40.321070 kernel: 32regs : 18374 MB/sec
May 13 00:21:40.321084 kernel: arm64_neon : 27052 MB/sec
May 13 00:21:40.321099 kernel: xor: using function: arm64_neon (27052 MB/sec)
May 13 00:21:40.370690 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:21:40.381059 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:21:40.390835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:21:40.404827 systemd-udevd[459]: Using default interface naming scheme 'v255'.
May 13 00:21:40.408221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:21:40.415899 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:21:40.426831 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
May 13 00:21:40.454816 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:21:40.465821 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:21:40.504232 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:21:40.513837 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:21:40.527748 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:21:40.530519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:21:40.531448 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:21:40.534015 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:21:40.541828 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:21:40.550962 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:21:40.557202 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 00:21:40.560956 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:21:40.565065 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:21:40.565106 kernel: GPT:9289727 != 19775487
May 13 00:21:40.565117 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:21:40.565126 kernel: GPT:9289727 != 19775487
May 13 00:21:40.565141 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:21:40.566327 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:21:40.568194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:21:40.568305 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:21:40.572703 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:21:40.573642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:21:40.573843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:21:40.576291 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:21:40.585690 kernel: BTRFS: device fsid ffc5eb33-beca-4ca0-9735-b9a50e66f21e devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (522)
May 13 00:21:40.585718 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (524)
May 13 00:21:40.592902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:21:40.604709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:21:40.610154 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:21:40.614556 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:21:40.621116 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:21:40.622015 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:21:40.627697 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:21:40.644858 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:21:40.646710 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:21:40.651256 disk-uuid[551]: Primary Header is updated.
May 13 00:21:40.651256 disk-uuid[551]: Secondary Entries is updated.
May 13 00:21:40.651256 disk-uuid[551]: Secondary Header is updated.
May 13 00:21:40.655716 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:21:40.669432 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:21:41.667731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:21:41.668053 disk-uuid[553]: The operation has completed successfully.
May 13 00:21:41.685460 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:21:41.685556 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:21:41.709853 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:21:41.712412 sh[577]: Success
May 13 00:21:41.726872 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:21:41.761184 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:21:41.763010 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:21:41.764608 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:21:41.775269 kernel: BTRFS info (device dm-0): first mount of filesystem ffc5eb33-beca-4ca0-9735-b9a50e66f21e
May 13 00:21:41.775304 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 00:21:41.775315 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:21:41.775325 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:21:41.776676 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:21:41.779635 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:21:41.780944 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:21:41.792831 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:21:41.794345 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:21:41.801954 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:21:41.801993 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:21:41.802004 kernel: BTRFS info (device vda6): using free space tree
May 13 00:21:41.804693 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:21:41.811580 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:21:41.812865 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:21:41.818093 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:21:41.826790 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:21:41.886021 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:21:41.899717 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:21:41.926589 systemd-networkd[765]: lo: Link UP
May 13 00:21:41.926602 systemd-networkd[765]: lo: Gained carrier
May 13 00:21:41.927566 systemd-networkd[765]: Enumeration completed
May 13 00:21:41.927661 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:21:41.928249 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:21:41.928252 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:21:41.928976 systemd[1]: Reached target network.target - Network.
May 13 00:21:41.929442 systemd-networkd[765]: eth0: Link UP
May 13 00:21:41.934278 ignition[672]: Ignition 2.19.0
May 13 00:21:41.929445 systemd-networkd[765]: eth0: Gained carrier
May 13 00:21:41.934284 ignition[672]: Stage: fetch-offline
May 13 00:21:41.929451 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:21:41.934317 ignition[672]: no configs at "/usr/lib/ignition/base.d"
May 13 00:21:41.934325 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:21:41.934469 ignition[672]: parsed url from cmdline: ""
May 13 00:21:41.934472 ignition[672]: no config URL provided
May 13 00:21:41.934476 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:21:41.934483 ignition[672]: no config at "/usr/lib/ignition/user.ign"
May 13 00:21:41.934505 ignition[672]: op(1): [started] loading QEMU firmware config module
May 13 00:21:41.934509 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:21:41.947472 ignition[672]: op(1): [finished] loading QEMU firmware config module
May 13 00:21:41.947708 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:21:41.986317 ignition[672]: parsing config with SHA512: 61f0b4980d5ed9dcaef19cba2b48b1f1fbc934c5a819606c101d537099ca29480e593dac73e369a979c4a5f9c3bf8b1ae02e362aa8af49303f6570d20a55ea07
May 13 00:21:41.991486 unknown[672]: fetched base config from "system"
May 13 00:21:41.991502 unknown[672]: fetched user config from "qemu"
May 13 00:21:41.992444 ignition[672]: fetch-offline: fetch-offline passed
May 13 00:21:41.992885 ignition[672]: Ignition finished successfully
May 13 00:21:41.994303 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:21:41.995709 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:21:42.002862 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:21:42.012520 ignition[773]: Ignition 2.19.0
May 13 00:21:42.012530 ignition[773]: Stage: kargs
May 13 00:21:42.012712 ignition[773]: no configs at "/usr/lib/ignition/base.d"
May 13 00:21:42.012721 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:21:42.013561 ignition[773]: kargs: kargs passed
May 13 00:21:42.013599 ignition[773]: Ignition finished successfully
May 13 00:21:42.015604 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:21:42.017516 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:21:42.030534 ignition[782]: Ignition 2.19.0
May 13 00:21:42.030545 ignition[782]: Stage: disks
May 13 00:21:42.030733 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 13 00:21:42.030742 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:21:42.031600 ignition[782]: disks: disks passed
May 13 00:21:42.031643 ignition[782]: Ignition finished successfully
May 13 00:21:42.033731 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:21:42.035491 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:21:42.037024 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:21:42.038590 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:21:42.040169 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:21:42.041511 systemd[1]: Reached target basic.target - Basic System.
May 13 00:21:42.048866 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:21:42.059267 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:21:42.063376 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:21:42.076775 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:21:42.121691 kernel: EXT4-fs (vda9): mounted filesystem 9903c37e-4e5a-41d4-80e5-5c3428d04b7e r/w with ordered data mode. Quota mode: none.
May 13 00:21:42.121852 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:21:42.123075 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:21:42.135746 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:21:42.137847 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:21:42.138859 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:21:42.138899 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:21:42.138920 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:21:42.144858 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:21:42.147320 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
May 13 00:21:42.147885 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:21:42.152613 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:21:42.152636 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:21:42.152647 kernel: BTRFS info (device vda6): using free space tree
May 13 00:21:42.152662 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:21:42.153852 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:21:42.195635 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:21:42.199725 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 13 00:21:42.203512 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:21:42.207410 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:21:42.275368 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:21:42.285777 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:21:42.288258 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:21:42.292676 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:21:42.307879 ignition[914]: INFO : Ignition 2.19.0
May 13 00:21:42.307879 ignition[914]: INFO : Stage: mount
May 13 00:21:42.310740 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:21:42.310740 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:21:42.310740 ignition[914]: INFO : mount: mount passed
May 13 00:21:42.310740 ignition[914]: INFO : Ignition finished successfully
May 13 00:21:42.307945 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:21:42.310383 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:21:42.319775 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:21:42.773877 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:21:42.785829 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:21:42.791443 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
May 13 00:21:42.791474 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:21:42.791484 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:21:42.792676 kernel: BTRFS info (device vda6): using free space tree
May 13 00:21:42.794680 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:21:42.795602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:21:42.811572 ignition[945]: INFO : Ignition 2.19.0
May 13 00:21:42.811572 ignition[945]: INFO : Stage: files
May 13 00:21:42.813136 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:21:42.813136 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:21:42.813136 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:21:42.816363 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:21:42.816363 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:21:42.819469 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:21:42.820791 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:21:42.820791 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:21:42.820061 unknown[945]: wrote ssh authorized keys file for user: core
May 13 00:21:42.824481 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:21:42.824481 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 00:21:42.863587 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:21:43.444466 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:21:43.446532 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 00:21:43.762901 systemd-networkd[765]: eth0: Gained IPv6LL
May 13 00:21:43.775679 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 00:21:44.048043 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:21:44.048043 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 00:21:44.051733 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:21:44.051733 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:21:44.051733 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 00:21:44.051733 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 13 00:21:44.051733 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:21:44.051733 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:21:44.051733 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 13 00:21:44.051733 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:21:44.072495 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:21:44.076455 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:21:44.078828 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:21:44.078828 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:21:44.078828 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:21:44.078828 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:21:44.078828 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:21:44.078828 ignition[945]: INFO : files: files passed
May 13 00:21:44.078828 ignition[945]: INFO : Ignition finished successfully
May 13 00:21:44.079477 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:21:44.091845 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:21:44.094870 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:21:44.097165 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:21:44.097258 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:21:44.102099 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:21:44.105252 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:21:44.105252 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:21:44.108448 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:21:44.109831 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:21:44.110849 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:21:44.125831 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:21:44.144740 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:21:44.144851 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:21:44.146963 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:21:44.148626 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:21:44.150395 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:21:44.151132 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:21:44.167812 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:21:44.175892 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:21:44.183326 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:21:44.184249 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:21:44.185916 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:21:44.187422 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:21:44.187537 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:21:44.189642 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:21:44.191386 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:21:44.192770 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:21:44.194288 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:21:44.195864 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:21:44.197767 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:21:44.199318 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:21:44.200898 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:21:44.202537 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:21:44.203973 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:21:44.205250 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:21:44.205365 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:21:44.207354 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:21:44.208959 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:21:44.210532 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:21:44.210621 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:21:44.212387 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:21:44.212501 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:21:44.214944 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:21:44.215058 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:21:44.216601 systemd[1]: Stopped target paths.target - Path Units. May 13 00:21:44.217912 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:21:44.218002 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:21:44.219590 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:21:44.221140 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:21:44.222373 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:21:44.222460 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:21:44.223882 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:21:44.223963 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:21:44.226064 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:21:44.226177 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:21:44.227532 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:21:44.227628 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:21:44.240926 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:21:44.241567 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:21:44.241710 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:21:44.246880 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:21:44.247510 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:21:44.247633 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:21:44.249193 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:21:44.252442 ignition[1001]: INFO : Ignition 2.19.0 May 13 00:21:44.252442 ignition[1001]: INFO : Stage: umount May 13 00:21:44.252442 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:21:44.252442 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:21:44.252442 ignition[1001]: INFO : umount: umount passed May 13 00:21:44.252442 ignition[1001]: INFO : Ignition finished successfully May 13 00:21:44.249294 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:21:44.253897 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:21:44.253976 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:21:44.256172 systemd[1]: Stopped target network.target - Network. May 13 00:21:44.257085 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:21:44.257140 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
May 13 00:21:44.258533 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:21:44.258572 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:21:44.260019 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:21:44.260057 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:21:44.261508 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:21:44.261545 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:21:44.263322 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:21:44.264789 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:21:44.266995 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:21:44.267550 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:21:44.267646 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:21:44.270423 systemd-networkd[765]: eth0: DHCPv6 lease lost May 13 00:21:44.272549 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:21:44.272650 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:21:44.274054 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:21:44.274086 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:21:44.285848 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:21:44.287565 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:21:44.287632 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:21:44.289754 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:21:44.293793 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:21:44.293898 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:21:44.297511 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:21:44.297593 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:21:44.299456 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:21:44.299500 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:21:44.301300 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:21:44.301342 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:21:44.303477 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:21:44.304692 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:21:44.306703 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:21:44.306796 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:21:44.309284 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:21:44.309342 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:21:44.311563 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:21:44.311595 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:21:44.313495 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:21:44.313548 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
May 13 00:21:44.316232 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:21:44.316277 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:21:44.318933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:21:44.318980 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:21:44.330817 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:21:44.331827 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:21:44.331881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:21:44.333921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:21:44.333965 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:21:44.336020 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:21:44.337694 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:21:44.338876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:21:44.338954 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:21:44.341238 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:21:44.342343 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:21:44.342402 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:21:44.344788 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:21:44.353648 systemd[1]: Switching root. May 13 00:21:44.380032 systemd-journald[237]: Journal stopped May 13 00:21:45.067837 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 13 00:21:45.067895 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:21:45.067908 kernel: SELinux: policy capability open_perms=1 May 13 00:21:45.067919 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:21:45.067929 kernel: SELinux: policy capability always_check_network=0 May 13 00:21:45.067943 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:21:45.067956 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:21:45.067970 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:21:45.067980 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:21:45.067990 kernel: audit: type=1403 audit(1747095704.521:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:21:45.068001 systemd[1]: Successfully loaded SELinux policy in 29.638ms. May 13 00:21:45.068022 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.488ms. May 13 00:21:45.068034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:21:45.068046 systemd[1]: Detected virtualization kvm. May 13 00:21:45.068058 systemd[1]: Detected architecture arm64. May 13 00:21:45.068068 systemd[1]: Detected first boot. May 13 00:21:45.068079 systemd[1]: Initializing machine ID from VM UUID. May 13 00:21:45.068090 zram_generator::config[1045]: No configuration found. May 13 00:21:45.068102 systemd[1]: Populated /etc with preset unit settings. 
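Each "SELinux: policy capability ..." line above corresponds to an entry the kernel exposes under /sys/fs/selinux/policy_capabilities/ once the policy is loaded. Assuming selinuxfs is mounted in the usual place, the logged values can be confirmed at runtime, e.g.:

    $ cat /sys/fs/selinux/policy_capabilities/network_peer_controls
    1
    $ cat /sys/fs/selinux/policy_capabilities/always_check_network
    0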
May 13 00:21:45.068113 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:21:45.068123 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:21:45.068136 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:21:45.068148 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:21:45.068161 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:21:45.068173 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:21:45.068183 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:21:45.068195 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:21:45.068206 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:21:45.068217 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:21:45.068228 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:21:45.068239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:21:45.068252 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:21:45.068263 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:21:45.068274 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:21:45.068285 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 00:21:45.068296 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:21:45.068307 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 00:21:45.068317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:21:45.068328 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:21:45.068339 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:21:45.068352 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:21:45.068363 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:21:45.068375 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:21:45.068386 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:21:45.068396 systemd[1]: Reached target slices.target - Slice Units. May 13 00:21:45.068408 systemd[1]: Reached target swap.target - Swaps. May 13 00:21:45.068419 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:21:45.068430 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:21:45.068444 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:21:45.068455 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:21:45.068466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:21:45.068476 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:21:45.068487 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
May 13 00:21:45.068498 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:21:45.068509 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:21:45.068520 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:21:45.068531 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:21:45.068543 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:21:45.068554 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:21:45.068565 systemd[1]: Reached target machines.target - Containers. May 13 00:21:45.068576 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:21:45.068587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:21:45.068598 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:21:45.068611 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:21:45.068622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:21:45.068634 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:21:45.068645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:21:45.068656 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:21:45.068842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:21:45.068857 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:21:45.068869 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:21:45.068880 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:21:45.068891 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:21:45.068902 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:21:45.068916 kernel: fuse: init (API version 7.39) May 13 00:21:45.068927 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:21:45.068937 kernel: loop: module loaded May 13 00:21:45.068947 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:21:45.068958 kernel: ACPI: bus type drm_connector registered May 13 00:21:45.068969 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:21:45.068980 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:21:45.068991 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:21:45.069001 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:21:45.069013 systemd[1]: Stopped verity-setup.service. May 13 00:21:45.069026 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:21:45.069057 systemd-journald[1112]: Collecting audit messages is disabled. May 13 00:21:45.069081 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:21:45.069093 systemd[1]: Mounted media.mount - External Media Directory. 
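The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are all instances of a single systemd template unit, modprobe@.service, with the module name passed as the instance string (%i). Upstream's template looks approximately like the following — quoted from memory, so treat it as a sketch rather than the exact unit shipped here:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

The leading "-" on ExecStart tells systemd to treat a nonzero modprobe exit as success, so a missing module does not fail the boot.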
May 13 00:21:45.069105 systemd-journald[1112]: Journal started May 13 00:21:45.069127 systemd-journald[1112]: Runtime Journal (/run/log/journal/39fa052bf65c42a98623465af7ee0640) is 5.9M, max 47.3M, 41.4M free. May 13 00:21:44.898529 systemd[1]: Queued start job for default target multi-user.target. May 13 00:21:44.912041 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:21:44.912392 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:21:45.071684 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:21:45.071944 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:21:45.072863 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:21:45.073756 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:21:45.074752 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:21:45.075834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:21:45.076965 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:21:45.077092 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:21:45.078179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:21:45.078306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:21:45.079427 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:21:45.079551 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:21:45.080590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:21:45.080754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:21:45.081872 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:21:45.082007 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:21:45.083139 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:21:45.083274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:21:45.084351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:21:45.085413 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:21:45.086594 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:21:45.097831 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:21:45.112825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:21:45.115084 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:21:45.115951 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:21:45.115994 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:21:45.117618 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 00:21:45.119786 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:21:45.121579 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:21:45.122466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 13 00:21:45.124882 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:21:45.126863 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:21:45.128122 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:21:45.129215 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:21:45.132788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:21:45.133707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:21:45.140480 systemd-journald[1112]: Time spent on flushing to /var/log/journal/39fa052bf65c42a98623465af7ee0640 is 22.488ms for 853 entries. May 13 00:21:45.140480 systemd-journald[1112]: System Journal (/var/log/journal/39fa052bf65c42a98623465af7ee0640) is 8.0M, max 195.6M, 187.6M free. May 13 00:21:45.168427 systemd-journald[1112]: Received client request to flush runtime journal. May 13 00:21:45.168470 kernel: loop0: detected capacity change from 0 to 114432 May 13 00:21:45.140810 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:21:45.147879 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:21:45.151846 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:21:45.154277 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:21:45.155421 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:21:45.160709 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:21:45.162264 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:21:45.166099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:21:45.169894 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:21:45.173561 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:21:45.176718 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:21:45.180842 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:21:45.183826 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:21:45.192824 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:21:45.196889 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:21:45.200688 kernel: loop1: detected capacity change from 0 to 114328 May 13 00:21:45.201075 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:21:45.206247 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:21:45.211971 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:21:45.232649 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. May 13 00:21:45.232685 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. 
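The journald size lines above (Runtime Journal 5.9M used, max 47.3M; System Journal 8.0M used, max 195.6M) are the automatic caps journald computes from the size of the backing filesystems when no explicit limits are configured. Pinning them manually would look roughly like this — illustrative values only:

    # /etc/systemd/journald.conf
    [Journal]
    RuntimeMaxUse=48M    # /run journal, lost at reboot
    SystemMaxUse=196M    # /var/log/journal, persistent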
May 13 00:21:45.237780 kernel: loop2: detected capacity change from 0 to 194096 May 13 00:21:45.238243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:21:45.285712 kernel: loop3: detected capacity change from 0 to 114432 May 13 00:21:45.289851 kernel: loop4: detected capacity change from 0 to 114328 May 13 00:21:45.299691 kernel: loop5: detected capacity change from 0 to 194096 May 13 00:21:45.303789 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:21:45.304176 (sd-merge)[1180]: Merged extensions into '/usr'. May 13 00:21:45.312249 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:21:45.312270 systemd[1]: Reloading... May 13 00:21:45.373704 zram_generator::config[1202]: No configuration found. May 13 00:21:45.419806 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:21:45.464376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:21:45.500488 systemd[1]: Reloading finished in 187 ms. May 13 00:21:45.535886 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 00:21:45.537028 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:21:45.552996 systemd[1]: Starting ensure-sysext.service... May 13 00:21:45.554758 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:21:45.565466 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... May 13 00:21:45.565480 systemd[1]: Reloading... May 13 00:21:45.579916 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:21:45.580186 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:21:45.580857 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:21:45.581073 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 13 00:21:45.581119 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. May 13 00:21:45.587009 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:21:45.587022 systemd-tmpfiles[1242]: Skipping /boot May 13 00:21:45.595334 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:21:45.595346 systemd-tmpfiles[1242]: Skipping /boot May 13 00:21:45.614690 zram_generator::config[1269]: No configuration found. May 13 00:21:45.698593 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:21:45.734382 systemd[1]: Reloading finished in 168 ms. May 13 00:21:45.748586 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:21:45.762124 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:21:45.768985 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
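The (sd-merge) lines above are systemd-sysext overlaying the three extension images discovered via /etc/extensions onto /usr. An image is only merged if it carries a matching extension-release file; for the kubernetes image that file would look something like this (illustrative — the exact fields depend on how the image was built):

    # /usr/lib/extension-release.d/extension-release.kubernetes  (inside the image)
    ID=flatcar
    SYSEXT_LEVEL=1.0

After boot, `systemd-sysext status` lists the merged images and `systemd-sysext refresh` re-merges them after a change.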
May 13 00:21:45.770968 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:21:45.774563 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:21:45.781245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:21:45.786109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:21:45.790941 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:21:45.794535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:21:45.797506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:21:45.813982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:21:45.816956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:21:45.818084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:21:45.821301 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:21:45.822840 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:21:45.825027 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:21:45.826380 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:21:45.826499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:21:45.828154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:21:45.828275 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:21:45.831551 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:21:45.831731 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:21:45.837935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:21:45.838649 systemd-udevd[1311]: Using default interface naming scheme 'v255'. May 13 00:21:45.839702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:21:45.841891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:21:45.845025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:21:45.845919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:21:45.846549 augenrules[1335]: No rules May 13 00:21:45.847905 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:21:45.848699 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:21:45.849660 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:21:45.852831 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:21:45.854265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:21:45.854381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
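The "augenrules[1335]: No rules" line above means the audit rules compiler found nothing under /etc/audit/rules.d/ to load. Had rules been provisioned, a fragment there would use standard auditd watch syntax, e.g. (hypothetical example, not present on this host):

    # /etc/audit/rules.d/10-watch-sshd.rules
    -w /etc/ssh/sshd_config -p wa -k sshd-config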
May 13 00:21:45.855697 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:21:45.855831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:21:45.857270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:21:45.857428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:21:45.865763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:21:45.867612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:21:45.874989 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:21:45.877908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:21:45.882955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:21:45.886934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:21:45.887963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:21:45.894537 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:21:45.895369 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:21:45.896254 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:21:45.900521 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:21:45.901854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:21:45.902037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:21:45.904306 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:21:45.904496 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:21:45.911792 systemd[1]: Finished ensure-sysext.service. May 13 00:21:45.916308 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:21:45.916473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:21:45.923464 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1352) May 13 00:21:45.922345 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:21:45.922489 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:21:45.939266 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 00:21:45.948088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:21:45.948150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:21:45.955908 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:21:45.961501 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:21:45.961903 systemd-resolved[1310]: Positive Trust Anchors: May 13 00:21:45.962145 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:21:45.962230 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:21:45.966856 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:21:45.971755 systemd-resolved[1310]: Defaulting to hostname 'linux'. May 13 00:21:45.973506 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:21:45.974454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:21:45.995122 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:21:46.010086 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:21:46.011492 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:21:46.026077 systemd-networkd[1376]: lo: Link UP May 13 00:21:46.026085 systemd-networkd[1376]: lo: Gained carrier May 13 00:21:46.028756 systemd-networkd[1376]: Enumeration completed May 13 00:21:46.028871 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:21:46.032402 systemd[1]: Reached target network.target - Network. May 13 00:21:46.036111 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:21:46.036121 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:21:46.038500 systemd-networkd[1376]: eth0: Link UP May 13 00:21:46.038508 systemd-networkd[1376]: eth0: Gained carrier May 13 00:21:46.038521 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:21:46.039970 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:21:46.046110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:21:46.053888 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:21:46.058544 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:21:46.058744 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:21:46.061317 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. May 13 00:21:46.062499 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:21:46.062608 systemd-timesyncd[1390]: Initial clock synchronization to Tue 2025-05-13 00:21:46.349134 UTC. May 13 00:21:46.077779 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:21:46.096529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
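The networkd lines above show eth0 matching the catch-all /usr/lib/systemd/network/zz-default.network and then acquiring 10.0.0.71/16 over DHCPv4. Flatcar's shipped default is approximately the following (a sketch from memory; the "zz-" prefix simply sorts it last, so any more specific .network file provisioned by the user wins):

    # /usr/lib/systemd/network/zz-default.network
    [Match]
    Name=*

    [Network]
    DHCP=yes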
May 13 00:21:46.123239 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:21:46.124748 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:21:46.127802 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:21:46.128924 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:21:46.130157 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:21:46.131539 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:21:46.132706 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:21:46.134058 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:21:46.135181 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:21:46.135220 systemd[1]: Reached target paths.target - Path Units. May 13 00:21:46.135898 systemd[1]: Reached target timers.target - Timer Units. May 13 00:21:46.137755 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:21:46.140193 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:21:46.148595 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:21:46.150790 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:21:46.152022 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:21:46.152880 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:21:46.153541 systemd[1]: Reached target basic.target - Basic System. May 13 00:21:46.154321 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:21:46.154350 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:21:46.155267 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:21:46.157010 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:21:46.158461 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:21:46.160817 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:21:46.163914 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:21:46.167514 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:21:46.168066 jq[1413]: false May 13 00:21:46.168557 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:21:46.172841 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 00:21:46.174630 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:21:46.177382 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:21:46.181188 systemd[1]: Starting systemd-logind.service - User Login Management... 
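The docker.socket unit set up above is also the source of the ListenStream warning logged earlier during the reloads: line 6 of the unit still points below the legacy /var/run directory, which systemd transparently rewrites to /run at load time. The relevant fragment is roughly (a sketch; only the ListenStream path itself is attested by the log):

    # /usr/lib/systemd/system/docker.socket
    [Socket]
    ListenStream=/var/run/docker.sock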
May 13 00:21:46.181607 dbus-daemon[1412]: [system] SELinux support is enabled May 13 00:21:46.183036 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:21:46.183424 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:21:46.188365 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:21:46.189271 extend-filesystems[1414]: Found loop3 May 13 00:21:46.190585 extend-filesystems[1414]: Found loop4 May 13 00:21:46.190585 extend-filesystems[1414]: Found loop5 May 13 00:21:46.190585 extend-filesystems[1414]: Found vda May 13 00:21:46.190585 extend-filesystems[1414]: Found vda1 May 13 00:21:46.190585 extend-filesystems[1414]: Found vda2 May 13 00:21:46.190585 extend-filesystems[1414]: Found vda3 May 13 00:21:46.190585 extend-filesystems[1414]: Found usr May 13 00:21:46.190585 extend-filesystems[1414]: Found vda4 May 13 00:21:46.190585 extend-filesystems[1414]: Found vda6 May 13 00:21:46.190585 extend-filesystems[1414]: Found vda7 May 13 00:21:46.190585 extend-filesystems[1414]: Found vda9 May 13 00:21:46.190585 extend-filesystems[1414]: Checking size of /dev/vda9 May 13 00:21:46.190018 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:21:46.191320 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:21:46.196709 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:21:46.199720 jq[1428]: true May 13 00:21:46.199377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:21:46.200691 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:21:46.204027 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:21:46.204176 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:21:46.205510 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:21:46.205780 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:21:46.215887 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:21:46.215924 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:21:46.219969 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:21:46.219988 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
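For scale, the extend-filesystems run completing just below grows /dev/vda9 online from 553472 to 1864699 blocks at 4 KiB each:

    553472 blocks  x 4096 B = 2,267,021,312 B ≈ 2.11 GiB
    1864699 blocks x 4096 B = 7,637,807,104 B ≈ 7.11 GiB

That is, the root filesystem expands from about 2.1 GiB to the full ~7.1 GiB of its partition without being unmounted, which is why resize2fs reports that on-line resizing is required.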
May 13 00:21:46.224089 jq[1436]: true May 13 00:21:46.224991 extend-filesystems[1414]: Resized partition /dev/vda9 May 13 00:21:46.236699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1363) May 13 00:21:46.234989 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:21:46.239193 tar[1433]: linux-arm64/helm May 13 00:21:46.246190 update_engine[1426]: I20250513 00:21:46.245869 1426 main.cc:92] Flatcar Update Engine starting May 13 00:21:46.246534 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) May 13 00:21:46.248802 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:21:46.250031 systemd-logind[1422]: New seat seat0. May 13 00:21:46.251263 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:21:46.253685 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:21:46.254946 systemd[1]: Started update-engine.service - Update Engine. May 13 00:21:46.255207 update_engine[1426]: I20250513 00:21:46.254993 1426 update_check_scheduler.cc:74] Next update check in 2m23s May 13 00:21:46.266940 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:21:46.281687 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:21:46.321920 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:21:46.321920 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:21:46.321920 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:21:46.326753 extend-filesystems[1414]: Resized filesystem in /dev/vda9 May 13 00:21:46.323500 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:21:46.325728 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:21:46.334561 bash[1466]: Updated "/home/core/.ssh/authorized_keys" May 13 00:21:46.343715 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:21:46.345331 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 00:21:46.411898 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:21:46.439276 containerd[1446]: time="2025-05-13T00:21:46.439038560Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:21:46.463370 containerd[1446]: time="2025-05-13T00:21:46.463071840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:21:46.464600 containerd[1446]: time="2025-05-13T00:21:46.464565040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:21:46.464777 containerd[1446]: time="2025-05-13T00:21:46.464748320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:21:46.464846 containerd[1446]: time="2025-05-13T00:21:46.464831120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 13 00:21:46.465149 containerd[1446]: time="2025-05-13T00:21:46.465127840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465218880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465290000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465304480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465460960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465477400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465489880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465499040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465570400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465795400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465896880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:21:46.466348 containerd[1446]: time="2025-05-13T00:21:46.465910040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:21:46.466599 containerd[1446]: time="2025-05-13T00:21:46.465992360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:21:46.466599 containerd[1446]: time="2025-05-13T00:21:46.466029600Z" level=info msg="metadata content store policy set" policy=shared May 13 00:21:46.469917 containerd[1446]: time="2025-05-13T00:21:46.469889280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:21:46.470115 containerd[1446]: time="2025-05-13T00:21:46.470096320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:21:46.470291 containerd[1446]: time="2025-05-13T00:21:46.470274320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 May 13 00:21:46.470420 containerd[1446]: time="2025-05-13T00:21:46.470402160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:21:46.470495 containerd[1446]: time="2025-05-13T00:21:46.470481440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:21:46.470863 containerd[1446]: time="2025-05-13T00:21:46.470809880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:21:46.471339 containerd[1446]: time="2025-05-13T00:21:46.471247840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:21:46.471744 containerd[1446]: time="2025-05-13T00:21:46.471536440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:21:46.471744 containerd[1446]: time="2025-05-13T00:21:46.471564480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:21:46.471744 containerd[1446]: time="2025-05-13T00:21:46.471589560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:21:46.471744 containerd[1446]: time="2025-05-13T00:21:46.471609080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:21:46.471744 containerd[1446]: time="2025-05-13T00:21:46.471621960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:21:46.471744 containerd[1446]: time="2025-05-13T00:21:46.471634240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:21:46.471744 containerd[1446]: time="2025-05-13T00:21:46.471648200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:21:46.471744 containerd[1446]: time="2025-05-13T00:21:46.471661320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:21:46.472032 containerd[1446]: time="2025-05-13T00:21:46.472010400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:21:46.472093 containerd[1446]: time="2025-05-13T00:21:46.472080280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:21:46.472201 containerd[1446]: time="2025-05-13T00:21:46.472184320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:21:46.472485 containerd[1446]: time="2025-05-13T00:21:46.472265840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:21:46.472485 containerd[1446]: time="2025-05-13T00:21:46.472341520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:21:46.472485 containerd[1446]: time="2025-05-13T00:21:46.472358760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:21:46.472485 containerd[1446]: time="2025-05-13T00:21:46.472370600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 13 00:21:46.472485 containerd[1446]: time="2025-05-13T00:21:46.472382440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:21:46.472485 containerd[1446]: time="2025-05-13T00:21:46.472408840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:21:46.472485 containerd[1446]: time="2025-05-13T00:21:46.472424480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:21:46.472485 containerd[1446]: time="2025-05-13T00:21:46.472437480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:21:46.472840 containerd[1446]: time="2025-05-13T00:21:46.472818360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:21:46.472955 containerd[1446]: time="2025-05-13T00:21:46.472937880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:21:46.473015 containerd[1446]: time="2025-05-13T00:21:46.473001800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:21:46.473341 containerd[1446]: time="2025-05-13T00:21:46.473128000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:21:46.473341 containerd[1446]: time="2025-05-13T00:21:46.473158920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:21:46.473341 containerd[1446]: time="2025-05-13T00:21:46.473177520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:21:46.473341 containerd[1446]: time="2025-05-13T00:21:46.473198360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:21:46.473341 containerd[1446]: time="2025-05-13T00:21:46.473209920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:21:46.473341 containerd[1446]: time="2025-05-13T00:21:46.473220640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:21:46.473678 containerd[1446]: time="2025-05-13T00:21:46.473595040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:21:46.473793 containerd[1446]: time="2025-05-13T00:21:46.473759240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:21:46.473911 containerd[1446]: time="2025-05-13T00:21:46.473894040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:21:46.474705 containerd[1446]: time="2025-05-13T00:21:46.473960160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:21:46.474705 containerd[1446]: time="2025-05-13T00:21:46.474024200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:21:46.474705 containerd[1446]: time="2025-05-13T00:21:46.474042360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 13 00:21:46.474705 containerd[1446]: time="2025-05-13T00:21:46.474055000Z" level=info msg="NRI interface is disabled by configuration." May 13 00:21:46.474705 containerd[1446]: time="2025-05-13T00:21:46.474071680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:21:46.474862 containerd[1446]: time="2025-05-13T00:21:46.474413000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:21:46.474862 containerd[1446]: time="2025-05-13T00:21:46.474469080Z" level=info msg="Connect containerd service" May 13 00:21:46.474862 containerd[1446]: time="2025-05-13T00:21:46.474500040Z" level=info msg="using legacy CRI server" May 13 00:21:46.474862 containerd[1446]: time="2025-05-13T00:21:46.474506800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:21:46.474862 containerd[1446]: time="2025-05-13T00:21:46.474582360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:21:46.475935 
containerd[1446]: time="2025-05-13T00:21:46.475907840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:21:46.476438 containerd[1446]: time="2025-05-13T00:21:46.476356920Z" level=info msg="Start subscribing containerd event" May 13 00:21:46.476524 containerd[1446]: time="2025-05-13T00:21:46.476510320Z" level=info msg="Start recovering state" May 13 00:21:46.476918 containerd[1446]: time="2025-05-13T00:21:46.476897720Z" level=info msg="Start event monitor" May 13 00:21:46.477010 containerd[1446]: time="2025-05-13T00:21:46.476986720Z" level=info msg="Start snapshots syncer" May 13 00:21:46.477128 containerd[1446]: time="2025-05-13T00:21:46.477059880Z" level=info msg="Start cni network conf syncer for default" May 13 00:21:46.477646 containerd[1446]: time="2025-05-13T00:21:46.477605360Z" level=info msg="Start streaming server" May 13 00:21:46.477935 containerd[1446]: time="2025-05-13T00:21:46.477644400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:21:46.478374 containerd[1446]: time="2025-05-13T00:21:46.478354640Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:21:46.478630 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:21:46.478977 containerd[1446]: time="2025-05-13T00:21:46.478658480Z" level=info msg="containerd successfully booted in 0.042414s" May 13 00:21:46.611083 tar[1433]: linux-arm64/LICENSE May 13 00:21:46.611298 tar[1433]: linux-arm64/README.md May 13 00:21:46.627957 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:21:47.199931 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:21:47.219124 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:21:47.233023 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:21:47.238327 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:21:47.238507 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:21:47.240887 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:21:47.253758 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:21:47.262988 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:21:47.265123 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 00:21:47.266145 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:21:47.862890 systemd-networkd[1376]: eth0: Gained IPv6LL May 13 00:21:47.866793 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:21:47.868576 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:21:47.893048 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:21:47.895332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:47.897249 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:21:47.911890 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:21:47.912080 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:21:47.913657 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
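The "failed to load cni during init" error above is expected on a first boot: no CNI plugin has written a network config yet, and the cni network conf syncer started a few lines later keeps watching /etc/cni/net.d until one appears. A minimal Python sketch of the same check, assuming the NetworkPluginConfDir from the config dump above and the usual libcni extensions (.conf, .conflist, .json):

import glob
import os

# Conf dir from the CRI plugin config dump above; extensions per libcni defaults.
CNI_CONF_DIR = "/etc/cni/net.d"
PATTERNS = ("*.conf", "*.conflist", "*.json")

found = sorted(p for pat in PATTERNS
               for p in glob.glob(os.path.join(CNI_CONF_DIR, pat)))
if not found:
    print(f"no network config found in {CNI_CONF_DIR}: cni plugin not initialized")
else:
    for path in found:
        print("cni config candidate:", path)

Once a network add-on drops a config there, the conf syncer is designed to pick it up without restarting containerd.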
May 13 00:21:47.916197 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:21:48.407382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:48.409019 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 00:21:48.410815 systemd[1]: Startup finished in 542ms (kernel) + 4.823s (initrd) + 3.921s (userspace) = 9.287s. May 13 00:21:48.411025 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:21:48.890266 kubelet[1525]: E0513 00:21:48.890154 1525 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:21:48.892883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:21:48.893036 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:21:52.785436 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:21:52.786537 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:43392.service - OpenSSH per-connection server daemon (10.0.0.1:43392). May 13 00:21:52.839131 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 43392 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:21:52.840793 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:52.848336 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:21:52.858906 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:21:52.860651 systemd-logind[1422]: New session 1 of user core. May 13 00:21:52.867703 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:21:52.869810 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:21:52.875985 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:21:52.951754 systemd[1543]: Queued start job for default target default.target. May 13 00:21:52.960573 systemd[1543]: Created slice app.slice - User Application Slice. May 13 00:21:52.960594 systemd[1543]: Reached target paths.target - Paths. May 13 00:21:52.960606 systemd[1543]: Reached target timers.target - Timers. May 13 00:21:52.961830 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:21:52.971396 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:21:52.971457 systemd[1543]: Reached target sockets.target - Sockets. May 13 00:21:52.971469 systemd[1543]: Reached target basic.target - Basic System. May 13 00:21:52.971501 systemd[1543]: Reached target default.target - Main User Target. May 13 00:21:52.971526 systemd[1543]: Startup finished in 90ms. May 13 00:21:52.971805 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:21:52.973090 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:21:53.032298 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:43400.service - OpenSSH per-connection server daemon (10.0.0.1:43400). 
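The kubelet failure above is the standard first-boot loop rather than a fault: /var/lib/kubelet/config.yaml is only written later (typically by kubeadm init/join), so the unit exits with status 1 and systemd restarts it on a timer, as the "Scheduled restart job" entry further down shows. A sketch of the same precondition check, path taken verbatim from the error:

import sys
from pathlib import Path

CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

if not CONFIG.is_file():
    # Mirrors the kubelet's failure mode: report the missing file and exit
    # non-zero, which systemd logs as status=1/FAILURE and later retries.
    print(f"open {CONFIG}: no such file or directory", file=sys.stderr)
    sys.exit(1)
print("kubelet config present:", CONFIG)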
May 13 00:21:53.071071 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 43400 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:21:53.072273 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:53.076739 systemd-logind[1422]: New session 2 of user core. May 13 00:21:53.085817 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 00:21:53.138291 sshd[1554]: pam_unix(sshd:session): session closed for user core May 13 00:21:53.150947 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:43400.service: Deactivated successfully. May 13 00:21:53.152392 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:21:53.155638 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. May 13 00:21:53.155881 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:43406.service - OpenSSH per-connection server daemon (10.0.0.1:43406). May 13 00:21:53.157001 systemd-logind[1422]: Removed session 2. May 13 00:21:53.194080 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 43406 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:21:53.195220 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:53.198618 systemd-logind[1422]: New session 3 of user core. May 13 00:21:53.211888 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:21:53.259722 sshd[1561]: pam_unix(sshd:session): session closed for user core May 13 00:21:53.267055 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:43406.service: Deactivated successfully. May 13 00:21:53.268421 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:21:53.269640 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. May 13 00:21:53.270751 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:43408.service - OpenSSH per-connection server daemon (10.0.0.1:43408). May 13 00:21:53.271448 systemd-logind[1422]: Removed session 3. May 13 00:21:53.310189 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 43408 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:21:53.311376 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:53.314845 systemd-logind[1422]: New session 4 of user core. May 13 00:21:53.325840 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:21:53.378251 sshd[1568]: pam_unix(sshd:session): session closed for user core May 13 00:21:53.391917 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:43408.service: Deactivated successfully. May 13 00:21:53.393098 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:21:53.394343 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. May 13 00:21:53.400936 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:43414.service - OpenSSH per-connection server daemon (10.0.0.1:43414). May 13 00:21:53.401830 systemd-logind[1422]: Removed session 4. May 13 00:21:53.436691 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 43414 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:21:53.437888 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:53.441262 systemd-logind[1422]: New session 5 of user core. May 13 00:21:53.452899 systemd[1]: Started session-5.scope - Session 5 of User core. 
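Each accepted login above names the same key by its SHA256 fingerprint (SHA256:ilwLBG…). OpenSSH derives that string as the unpadded base64 of the SHA-256 digest of the raw public-key blob, so it can be reproduced from any copy of the public key; the authorized_keys path below is an assumption for illustration:

import base64
import hashlib

# Hypothetical location; any OpenSSH public key line works ("ssh-rsa AAAA... comment").
with open("/home/core/.ssh/authorized_keys") as f:
    key_blob_b64 = f.readline().split()[1]       # second field is the key blob

digest = hashlib.sha256(base64.b64decode(key_blob_b64)).digest()
fingerprint = base64.b64encode(digest).decode().rstrip("=")  # OpenSSH drops padding
print("SHA256:" + fingerprint)                   # compare with sshd's Accepted lines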
May 13 00:21:53.510330 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:21:53.510610 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:21:53.525535 sudo[1578]: pam_unix(sudo:session): session closed for user root May 13 00:21:53.527391 sshd[1575]: pam_unix(sshd:session): session closed for user core May 13 00:21:53.536048 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:43414.service: Deactivated successfully. May 13 00:21:53.539042 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:21:53.540273 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. May 13 00:21:53.541527 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:43420.service - OpenSSH per-connection server daemon (10.0.0.1:43420). May 13 00:21:53.542244 systemd-logind[1422]: Removed session 5. May 13 00:21:53.581240 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 43420 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:21:53.582616 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:53.586083 systemd-logind[1422]: New session 6 of user core. May 13 00:21:53.598826 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:21:53.649447 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:21:53.649747 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:21:53.652776 sudo[1587]: pam_unix(sudo:session): session closed for user root May 13 00:21:53.657152 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:21:53.657402 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:21:53.674116 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:21:53.675129 auditctl[1590]: No rules May 13 00:21:53.675956 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:21:53.676155 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:21:53.677938 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:21:53.700147 augenrules[1608]: No rules May 13 00:21:53.701324 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:21:53.702874 sudo[1586]: pam_unix(sudo:session): session closed for user root May 13 00:21:53.704346 sshd[1583]: pam_unix(sshd:session): session closed for user core May 13 00:21:53.711070 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:43420.service: Deactivated successfully. May 13 00:21:53.713085 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:21:53.714250 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. May 13 00:21:53.715303 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:43432.service - OpenSSH per-connection server daemon (10.0.0.1:43432). May 13 00:21:53.716980 systemd-logind[1422]: Removed session 6. May 13 00:21:53.754890 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 43432 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:21:53.756018 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:21:53.759566 systemd-logind[1422]: New session 7 of user core. May 13 00:21:53.770909 systemd[1]: Started session-7.scope - Session 7 of User core. 
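The sudo records above use a fixed field layout (user : PWD ; USER ; COMMAND), which makes an audit trail of privileged commands easy to extract mechanically. A small parser, fed one of the literal lines from this boot:

import re

SUDO_RE = re.compile(
    r"(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)"
)

line = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules"
m = SUDO_RE.match(line)
if m:
    print(f"{m['user']} ran {m['cmd']!r} as {m['runas']} from {m['pwd']}")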
May 13 00:21:53.821616 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:21:53.822194 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:21:54.118910 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 00:21:54.119111 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:21:54.376006 dockerd[1638]: time="2025-05-13T00:21:54.375880592Z" level=info msg="Starting up" May 13 00:21:54.507969 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport912628937-merged.mount: Deactivated successfully. May 13 00:21:54.521684 dockerd[1638]: time="2025-05-13T00:21:54.521637687Z" level=info msg="Loading containers: start." May 13 00:21:54.604710 kernel: Initializing XFRM netlink socket May 13 00:21:54.670610 systemd-networkd[1376]: docker0: Link UP May 13 00:21:54.685902 dockerd[1638]: time="2025-05-13T00:21:54.685796262Z" level=info msg="Loading containers: done." May 13 00:21:54.697617 dockerd[1638]: time="2025-05-13T00:21:54.697570602Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:21:54.697752 dockerd[1638]: time="2025-05-13T00:21:54.697664424Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 00:21:54.697827 dockerd[1638]: time="2025-05-13T00:21:54.697776378Z" level=info msg="Daemon has completed initialization" May 13 00:21:54.724490 dockerd[1638]: time="2025-05-13T00:21:54.724368702Z" level=info msg="API listen on /run/docker.sock" May 13 00:21:54.724709 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:21:55.429683 containerd[1446]: time="2025-05-13T00:21:55.429639568Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:21:55.503936 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck169284920-merged.mount: Deactivated successfully. May 13 00:21:56.067766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819618917.mount: Deactivated successfully. 
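"API listen on /run/docker.sock" above means dockerd is now serving the Docker Engine HTTP API over a Unix socket. A stdlib-only sketch that queries the real /version endpoint over that socket (run as root or a docker-group member); speaking HTTP/1.0 by hand is only to avoid third-party client libraries:

import json
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")                    # socket path from the log above
s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")

raw = b""
while chunk := s.recv(4096):
    raw += chunk
s.close()

_headers, _, body = raw.partition(b"\r\n\r\n")
info = json.loads(body)
print(info["Version"], info.get("ApiVersion"))   # daemon logged version=26.1.0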
May 13 00:21:57.057580 containerd[1446]: time="2025-05-13T00:21:57.057526283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:57.058114 containerd[1446]: time="2025-05-13T00:21:57.058077916Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 13 00:21:57.059592 containerd[1446]: time="2025-05-13T00:21:57.059544454Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:57.061993 containerd[1446]: time="2025-05-13T00:21:57.061958120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:57.064008 containerd[1446]: time="2025-05-13T00:21:57.063780180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.634069782s" May 13 00:21:57.064008 containerd[1446]: time="2025-05-13T00:21:57.063844622Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 13 00:21:57.082577 containerd[1446]: time="2025-05-13T00:21:57.082539730Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:21:58.546380 containerd[1446]: time="2025-05-13T00:21:58.545811204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:58.550108 containerd[1446]: time="2025-05-13T00:21:58.550068198Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 13 00:21:58.551093 containerd[1446]: time="2025-05-13T00:21:58.551063220Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:58.553941 containerd[1446]: time="2025-05-13T00:21:58.553908825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:58.555200 containerd[1446]: time="2025-05-13T00:21:58.554990275Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.47241135s" May 13 00:21:58.555200 containerd[1446]: time="2025-05-13T00:21:58.555024192Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 13 
00:21:58.573721 containerd[1446]: time="2025-05-13T00:21:58.573690334Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:21:58.992231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:21:59.000930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:21:59.097360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:21:59.100906 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:21:59.142910 kubelet[1874]: E0513 00:21:59.142853 1874 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:21:59.146173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:21:59.146310 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:21:59.554239 containerd[1446]: time="2025-05-13T00:21:59.554191587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:59.555201 containerd[1446]: time="2025-05-13T00:21:59.555169114Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 13 00:21:59.555905 containerd[1446]: time="2025-05-13T00:21:59.555864900Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:59.558910 containerd[1446]: time="2025-05-13T00:21:59.558860209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:21:59.560475 containerd[1446]: time="2025-05-13T00:21:59.560426555Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 986.702715ms" May 13 00:21:59.560475 containerd[1446]: time="2025-05-13T00:21:59.560469745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 13 00:21:59.579377 containerd[1446]: time="2025-05-13T00:21:59.579346802Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:22:00.462806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824028304.mount: Deactivated successfully. 
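Each "Pulled image" record pairs a byte count with a wall-clock duration, so effective registry throughput falls straight out of the log; for the kube-scheduler pull above that is roughly 16.3 MB in 0.99 s, about 16.5 MB/s:

# Figures copied from the kube-scheduler pull record above.
bytes_read = 16_263_947        # "bytes read=16263947"
seconds = 0.986702715          # "in 986.702715ms"

mb = bytes_read / 1_000_000
print(f"{mb:.1f} MB in {seconds:.2f} s -> {mb / seconds:.1f} MB/s")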
May 13 00:22:00.655828 containerd[1446]: time="2025-05-13T00:22:00.655767190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:00.656905 containerd[1446]: time="2025-05-13T00:22:00.656860050Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 13 00:22:00.657440 containerd[1446]: time="2025-05-13T00:22:00.657406158Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:00.659335 containerd[1446]: time="2025-05-13T00:22:00.659301576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:00.660036 containerd[1446]: time="2025-05-13T00:22:00.659990943Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.080502254s" May 13 00:22:00.660082 containerd[1446]: time="2025-05-13T00:22:00.660037113Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 00:22:00.679023 containerd[1446]: time="2025-05-13T00:22:00.678974181Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:22:01.256065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount961259597.mount: Deactivated successfully. 
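The containerd entries throughout this log are logfmt-style records (time="…" level=… msg="…"), so the whole pull history can be mined with a short parser. A sketch handling both quoted values with escaped quotes and bare values, fed the PullImage line above:

import re

# key=value pairs; quoted values may contain \" escapes, bare values end at space.
FIELD_RE = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

line = ('time="2025-05-13T00:22:00.678974181Z" level=info '
        'msg="PullImage \\"registry.k8s.io/coredns/coredns:v1.11.1\\""')
fields = {k: (q or bare).replace('\\"', '"')
          for k, q, bare in FIELD_RE.findall(line)}
print(fields["level"], fields["msg"])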
May 13 00:22:02.017796 containerd[1446]: time="2025-05-13T00:22:02.017741976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:02.018189 containerd[1446]: time="2025-05-13T00:22:02.018134706Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 13 00:22:02.019115 containerd[1446]: time="2025-05-13T00:22:02.019078166Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:02.022894 containerd[1446]: time="2025-05-13T00:22:02.022852693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:02.024117 containerd[1446]: time="2025-05-13T00:22:02.024071157Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.345055048s" May 13 00:22:02.024148 containerd[1446]: time="2025-05-13T00:22:02.024116857Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 13 00:22:02.042107 containerd[1446]: time="2025-05-13T00:22:02.042063553Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:22:02.470413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274194283.mount: Deactivated successfully. 
May 13 00:22:02.475103 containerd[1446]: time="2025-05-13T00:22:02.475052815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:02.476143 containerd[1446]: time="2025-05-13T00:22:02.476099251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 13 00:22:02.477095 containerd[1446]: time="2025-05-13T00:22:02.477045887Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:02.479701 containerd[1446]: time="2025-05-13T00:22:02.479646283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:02.480442 containerd[1446]: time="2025-05-13T00:22:02.480401357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 438.297168ms" May 13 00:22:02.480487 containerd[1446]: time="2025-05-13T00:22:02.480438938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 13 00:22:02.498257 containerd[1446]: time="2025-05-13T00:22:02.498225423Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:22:02.965666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765762382.mount: Deactivated successfully. May 13 00:22:04.610201 containerd[1446]: time="2025-05-13T00:22:04.610140612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:04.610990 containerd[1446]: time="2025-05-13T00:22:04.610962002Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 13 00:22:04.611679 containerd[1446]: time="2025-05-13T00:22:04.611634162Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:04.614661 containerd[1446]: time="2025-05-13T00:22:04.614627324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:04.615911 containerd[1446]: time="2025-05-13T00:22:04.615876251Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.117618482s" May 13 00:22:04.615911 containerd[1446]: time="2025-05-13T00:22:04.615910377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 13 00:22:08.619989 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
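With the etcd pull above, all seven control-plane images are local before the API server ever starts. Summing the durations containerd reported (it prints Go-style durations, so both ms and s suffixes occur in this log) shows the whole prefetch cost about 9.1 s of pull time:

# The seven durations exactly as reported by the "Pulled image" records above.
reported = ["1.634069782s", "1.47241135s", "986.702715ms", "1.080502254s",
            "1.345055048s", "438.297168ms", "2.117618482s"]

def seconds(d: str) -> float:
    # Only the ms and s forms of Go's duration syntax appear in this log.
    return float(d[:-2]) / 1000 if d.endswith("ms") else float(d[:-1])

total = sum(seconds(d) for d in reported)
print(f"total pull time: {total:.2f}s")   # ~9.07s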
May 13 00:22:08.627907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:22:08.644863 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit session-7.scope)... May 13 00:22:08.644880 systemd[1]: Reloading... May 13 00:22:08.721344 zram_generator::config[2138]: No configuration found. May 13 00:22:08.845050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:22:08.898284 systemd[1]: Reloading finished in 253 ms. May 13 00:22:08.952501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:22:08.954779 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:22:08.955003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:22:08.956372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:22:09.044477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:22:09.048959 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:22:09.089052 kubelet[2182]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:22:09.089052 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:22:09.089052 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:22:09.089888 kubelet[2182]: I0513 00:22:09.089843 2182 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:22:09.809913 kubelet[2182]: I0513 00:22:09.809857 2182 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:22:09.809913 kubelet[2182]: I0513 00:22:09.809891 2182 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:22:09.810147 kubelet[2182]: I0513 00:22:09.810122 2182 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:22:09.844139 kubelet[2182]: E0513 00:22:09.844091 2182 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:09.844139 kubelet[2182]: I0513 00:22:09.844092 2182 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:22:09.854104 kubelet[2182]: I0513 00:22:09.854079 2182 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:22:09.855363 kubelet[2182]: I0513 00:22:09.855312 2182 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:22:09.855534 kubelet[2182]: I0513 00:22:09.855364 2182 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:22:09.855617 kubelet[2182]: I0513 00:22:09.855597 2182 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:22:09.855617 kubelet[2182]: I0513 00:22:09.855608 2182 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:22:09.855903 kubelet[2182]: I0513 00:22:09.855886 2182 state_mem.go:36] "Initialized new in-memory state store" May 13 00:22:09.856831 kubelet[2182]: I0513 00:22:09.856812 2182 kubelet.go:400] "Attempting to sync node with API server" May 13 00:22:09.856868 kubelet[2182]: I0513 00:22:09.856836 2182 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:22:09.857144 kubelet[2182]: I0513 00:22:09.857134 2182 kubelet.go:312] "Adding apiserver pod source" May 13 00:22:09.857307 kubelet[2182]: I0513 00:22:09.857296 2182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:22:09.857600 kubelet[2182]: W0513 00:22:09.857561 2182 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:09.857631 kubelet[2182]: E0513 00:22:09.857614 2182 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:09.857992 kubelet[2182]: W0513 00:22:09.857955 2182 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:09.858229 kubelet[2182]: E0513 00:22:09.858218 2182 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:09.858324 kubelet[2182]: I0513 00:22:09.858236 2182 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:22:09.859078 kubelet[2182]: I0513 00:22:09.858747 2182 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:22:09.859078 kubelet[2182]: W0513 00:22:09.858855 2182 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:22:09.860007 kubelet[2182]: I0513 00:22:09.859642 2182 server.go:1264] "Started kubelet" May 13 00:22:09.860007 kubelet[2182]: I0513 00:22:09.859893 2182 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:22:09.860575 kubelet[2182]: I0513 00:22:09.860406 2182 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:22:09.860802 kubelet[2182]: I0513 00:22:09.860784 2182 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:22:09.860995 kubelet[2182]: I0513 00:22:09.860958 2182 server.go:455] "Adding debug handlers to kubelet server" May 13 00:22:09.867392 kubelet[2182]: I0513 00:22:09.863390 2182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:22:09.867571 kubelet[2182]: E0513 00:22:09.862294 2182 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee4f4ee26c08 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:22:09.859619848 +0000 UTC m=+0.807443193,LastTimestamp:2025-05-13 00:22:09.859619848 +0000 UTC m=+0.807443193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:22:09.871781 kubelet[2182]: E0513 00:22:09.869359 2182 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:09.871781 kubelet[2182]: I0513 00:22:09.869599 2182 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:22:09.871781 kubelet[2182]: I0513 00:22:09.869733 2182 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:22:09.871781 kubelet[2182]: I0513 00:22:09.870017 2182 reconciler.go:26] "Reconciler: start to sync state" May 13 00:22:09.871781 kubelet[2182]: W0513 00:22:09.870294 2182 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 
00:22:09.871781 kubelet[2182]: E0513 00:22:09.870334 2182 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:09.873493 kubelet[2182]: E0513 00:22:09.873450 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" May 13 00:22:09.874922 kubelet[2182]: I0513 00:22:09.874879 2182 factory.go:221] Registration of the systemd container factory successfully May 13 00:22:09.875048 kubelet[2182]: I0513 00:22:09.875025 2182 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:22:09.876302 kubelet[2182]: E0513 00:22:09.876261 2182 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:22:09.877115 kubelet[2182]: I0513 00:22:09.877085 2182 factory.go:221] Registration of the containerd container factory successfully May 13 00:22:09.885501 kubelet[2182]: I0513 00:22:09.885448 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:22:09.887799 kubelet[2182]: I0513 00:22:09.887767 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:22:09.887854 kubelet[2182]: I0513 00:22:09.887806 2182 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:22:09.887854 kubelet[2182]: I0513 00:22:09.887826 2182 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:22:09.887911 kubelet[2182]: E0513 00:22:09.887877 2182 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:22:09.888371 kubelet[2182]: W0513 00:22:09.888334 2182 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:09.888458 kubelet[2182]: E0513 00:22:09.888385 2182 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:09.892244 kubelet[2182]: I0513 00:22:09.892215 2182 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:22:09.892244 kubelet[2182]: I0513 00:22:09.892232 2182 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:22:09.892244 kubelet[2182]: I0513 00:22:09.892249 2182 state_mem.go:36] "Initialized new in-memory state store" May 13 00:22:09.894402 kubelet[2182]: I0513 00:22:09.894380 2182 policy_none.go:49] "None policy: Start" May 13 00:22:09.895011 kubelet[2182]: I0513 00:22:09.894991 2182 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:22:09.895114 kubelet[2182]: I0513 00:22:09.895018 2182 state_mem.go:35] "Initializing new in-memory state store" May 13 00:22:09.899978 systemd[1]: 
Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:22:09.912095 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:22:09.914726 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:22:09.923341 kubelet[2182]: I0513 00:22:09.923314 2182 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:22:09.923645 kubelet[2182]: I0513 00:22:09.923610 2182 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:22:09.923977 kubelet[2182]: I0513 00:22:09.923886 2182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:22:09.924976 kubelet[2182]: E0513 00:22:09.924950 2182 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:22:09.971520 kubelet[2182]: I0513 00:22:09.971466 2182 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:22:09.971874 kubelet[2182]: E0513 00:22:09.971838 2182 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 13 00:22:09.988222 kubelet[2182]: I0513 00:22:09.988105 2182 topology_manager.go:215] "Topology Admit Handler" podUID="98af2381cb416b5fe8063daba5d01d7f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:22:09.989150 kubelet[2182]: I0513 00:22:09.989112 2182 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:22:09.989948 kubelet[2182]: I0513 00:22:09.989920 2182 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:22:09.996148 systemd[1]: Created slice kubepods-burstable-pod98af2381cb416b5fe8063daba5d01d7f.slice - libcontainer container kubepods-burstable-pod98af2381cb416b5fe8063daba5d01d7f.slice. May 13 00:22:10.009281 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 13 00:22:10.012285 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
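The HardEvictionThresholds in the container-manager config dumped earlier (memory.available<100Mi, nodefs.available<10%, imagefs.available<15%, inodesFree<5%) are the limits the eviction manager will enforce once stats initialization completes. A sketch of how such thresholds evaluate, with made-up capacities purely for illustration:

# Thresholds from the NodeConfig dump above; observed values below are invented.
thresholds = {
    "memory.available":  ("quantity", 100 * 1024**2),  # 100Mi
    "nodefs.available":  ("percentage", 0.10),
    "imagefs.available": ("percentage", 0.15),
}
observed = {  # signal -> (available, capacity), hypothetical sample numbers
    "memory.available":  (80 * 1024**2, 2 * 1024**3),
    "nodefs.available":  (12 * 1024**3, 100 * 1024**3),
}

for signal, (avail, capacity) in observed.items():
    kind, value = thresholds[signal]
    limit = value if kind == "quantity" else value * capacity
    verdict = "EVICT" if avail < limit else "ok"
    print(f"{signal}: available={avail} threshold={limit:.0f} -> {verdict}")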
May 13 00:22:10.071714 kubelet[2182]: I0513 00:22:10.070963 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98af2381cb416b5fe8063daba5d01d7f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"98af2381cb416b5fe8063daba5d01d7f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:22:10.071714 kubelet[2182]: I0513 00:22:10.071002 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:10.071714 kubelet[2182]: I0513 00:22:10.071023 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:10.071714 kubelet[2182]: I0513 00:22:10.071045 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:22:10.071714 kubelet[2182]: I0513 00:22:10.071062 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98af2381cb416b5fe8063daba5d01d7f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"98af2381cb416b5fe8063daba5d01d7f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:22:10.072029 kubelet[2182]: I0513 00:22:10.071081 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:10.072029 kubelet[2182]: I0513 00:22:10.071102 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:10.072029 kubelet[2182]: I0513 00:22:10.071140 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:10.072029 kubelet[2182]: I0513 00:22:10.071170 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98af2381cb416b5fe8063daba5d01d7f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"98af2381cb416b5fe8063daba5d01d7f\") " 
pod="kube-system/kube-apiserver-localhost" May 13 00:22:10.075104 kubelet[2182]: E0513 00:22:10.075056 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" May 13 00:22:10.173139 kubelet[2182]: I0513 00:22:10.173093 2182 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:22:10.173626 kubelet[2182]: E0513 00:22:10.173589 2182 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 13 00:22:10.307431 kubelet[2182]: E0513 00:22:10.307387 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:10.308014 containerd[1446]: time="2025-05-13T00:22:10.307968545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:98af2381cb416b5fe8063daba5d01d7f,Namespace:kube-system,Attempt:0,}" May 13 00:22:10.312221 kubelet[2182]: E0513 00:22:10.312200 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:10.312530 containerd[1446]: time="2025-05-13T00:22:10.312504030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:22:10.314939 kubelet[2182]: E0513 00:22:10.314858 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:10.315281 containerd[1446]: time="2025-05-13T00:22:10.315249064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:22:10.475958 kubelet[2182]: E0513 00:22:10.475843 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" May 13 00:22:10.575674 kubelet[2182]: I0513 00:22:10.575626 2182 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:22:10.575981 kubelet[2182]: E0513 00:22:10.575939 2182 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 13 00:22:10.737175 kubelet[2182]: W0513 00:22:10.737010 2182 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:10.737175 kubelet[2182]: E0513 00:22:10.737081 2182 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:10.833696 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730793542.mount: Deactivated successfully. May 13 00:22:10.836605 containerd[1446]: time="2025-05-13T00:22:10.836568053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:10.839692 containerd[1446]: time="2025-05-13T00:22:10.839244894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 00:22:10.843542 containerd[1446]: time="2025-05-13T00:22:10.843499432Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:10.844927 containerd[1446]: time="2025-05-13T00:22:10.844893906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:22:10.845095 containerd[1446]: time="2025-05-13T00:22:10.845063066Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:10.845833 containerd[1446]: time="2025-05-13T00:22:10.845803815Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:10.846275 containerd[1446]: time="2025-05-13T00:22:10.846077669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:22:10.848043 containerd[1446]: time="2025-05-13T00:22:10.848008753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:22:10.850441 containerd[1446]: time="2025-05-13T00:22:10.850383894Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.329926ms" May 13 00:22:10.852985 containerd[1446]: time="2025-05-13T00:22:10.852823180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 540.259091ms" May 13 00:22:10.853647 containerd[1446]: time="2025-05-13T00:22:10.853613812Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.307693ms" May 13 00:22:11.009998 kubelet[2182]: W0513 00:22:11.009918 2182 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:11.009998 kubelet[2182]: E0513 00:22:11.009999 2182 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:11.021725 containerd[1446]: time="2025-05-13T00:22:11.021579464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:11.021891 containerd[1446]: time="2025-05-13T00:22:11.021770261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:11.021891 containerd[1446]: time="2025-05-13T00:22:11.021813284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:11.022056 containerd[1446]: time="2025-05-13T00:22:11.021825261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:11.022056 containerd[1446]: time="2025-05-13T00:22:11.021942111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:11.022240 containerd[1446]: time="2025-05-13T00:22:11.022039652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:11.022240 containerd[1446]: time="2025-05-13T00:22:11.022162270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:11.022992 containerd[1446]: time="2025-05-13T00:22:11.022907993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:11.025803 containerd[1446]: time="2025-05-13T00:22:11.025518023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:11.025858 containerd[1446]: time="2025-05-13T00:22:11.025824347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:11.025956 containerd[1446]: time="2025-05-13T00:22:11.025848663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:11.026454 containerd[1446]: time="2025-05-13T00:22:11.026334528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:11.045850 systemd[1]: Started cri-containerd-cc72151d1215a450c42fe835b4b2d14b8ac69df441da06a6e696eba135f6be7b.scope - libcontainer container cc72151d1215a450c42fe835b4b2d14b8ac69df441da06a6e696eba135f6be7b. May 13 00:22:11.048969 systemd[1]: Started cri-containerd-172c300600a91f7fd102000ee0aa905e5c1ef674b76491d7305ba1d3c25e0722.scope - libcontainer container 172c300600a91f7fd102000ee0aa905e5c1ef674b76491d7305ba1d3c25e0722. May 13 00:22:11.050033 systemd[1]: Started cri-containerd-45f6ac6e3171d654f122ebe983f5d1844041771c94021f45d91d87bf9604717c.scope - libcontainer container 45f6ac6e3171d654f122ebe983f5d1844041771c94021f45d91d87bf9604717c. 
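
[Editor's note] The containerd tmpmount unit deactivated earlier ("var-lib-containerd-tmpmounts-containerd\x2dmount1730793542.mount") follows systemd's path-escaping rules: a literal "-" is escaped as \x2d, then "/" separators become "-". A minimal sketch of that escaping in Go — a simplification of systemd-escape, which also hex-escapes other special characters:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates `systemd-escape --path --suffix=mount` for simple
    // paths: escape existing dashes first, then turn path separators into dashes.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        p = strings.ReplaceAll(p, "-", `\x2d`)
        return strings.ReplaceAll(p, "/", "-") + ".mount"
    }

    func main() {
        // Prints var-lib-containerd-tmpmounts-containerd\x2dmount1730793542.mount,
        // matching the unit name systemd reported above.
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount1730793542"))
    }
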
May 13 00:22:11.083749 containerd[1446]: time="2025-05-13T00:22:11.083628835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"172c300600a91f7fd102000ee0aa905e5c1ef674b76491d7305ba1d3c25e0722\"" May 13 00:22:11.089464 containerd[1446]: time="2025-05-13T00:22:11.084217610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc72151d1215a450c42fe835b4b2d14b8ac69df441da06a6e696eba135f6be7b\"" May 13 00:22:11.089464 containerd[1446]: time="2025-05-13T00:22:11.086310609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:98af2381cb416b5fe8063daba5d01d7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"45f6ac6e3171d654f122ebe983f5d1844041771c94021f45d91d87bf9604717c\"" May 13 00:22:11.090608 kubelet[2182]: E0513 00:22:11.090556 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:11.090937 kubelet[2182]: E0513 00:22:11.090917 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:11.091231 kubelet[2182]: E0513 00:22:11.091208 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:11.094935 containerd[1446]: time="2025-05-13T00:22:11.094910455Z" level=info msg="CreateContainer within sandbox \"172c300600a91f7fd102000ee0aa905e5c1ef674b76491d7305ba1d3c25e0722\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:22:11.095006 containerd[1446]: time="2025-05-13T00:22:11.094973307Z" level=info msg="CreateContainer within sandbox \"cc72151d1215a450c42fe835b4b2d14b8ac69df441da06a6e696eba135f6be7b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:22:11.095287 containerd[1446]: time="2025-05-13T00:22:11.095250389Z" level=info msg="CreateContainer within sandbox \"45f6ac6e3171d654f122ebe983f5d1844041771c94021f45d91d87bf9604717c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:22:11.111775 containerd[1446]: time="2025-05-13T00:22:11.111739490Z" level=info msg="CreateContainer within sandbox \"172c300600a91f7fd102000ee0aa905e5c1ef674b76491d7305ba1d3c25e0722\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dcb019d3d964eabee49e37dc2efa269059dd9bdf2e79c3366aec3ee8956e1e6b\"" May 13 00:22:11.112740 containerd[1446]: time="2025-05-13T00:22:11.112657303Z" level=info msg="StartContainer for \"dcb019d3d964eabee49e37dc2efa269059dd9bdf2e79c3366aec3ee8956e1e6b\"" May 13 00:22:11.113409 containerd[1446]: time="2025-05-13T00:22:11.113380553Z" level=info msg="CreateContainer within sandbox \"45f6ac6e3171d654f122ebe983f5d1844041771c94021f45d91d87bf9604717c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a01664c441a2f3bd254ea3f8d7c2ff8ee2c51c0c23d303c1838d39a4c358f1a8\"" May 13 00:22:11.113800 containerd[1446]: time="2025-05-13T00:22:11.113772922Z" level=info msg="StartContainer for \"a01664c441a2f3bd254ea3f8d7c2ff8ee2c51c0c23d303c1838d39a4c358f1a8\"" May 13 00:22:11.116781 
containerd[1446]: time="2025-05-13T00:22:11.116736585Z" level=info msg="CreateContainer within sandbox \"cc72151d1215a450c42fe835b4b2d14b8ac69df441da06a6e696eba135f6be7b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"375dc39cecf38418c6ad7a90bb93d9aec4e46ac7a9d6ea5d2b3012924eb468fa\"" May 13 00:22:11.117189 containerd[1446]: time="2025-05-13T00:22:11.117167932Z" level=info msg="StartContainer for \"375dc39cecf38418c6ad7a90bb93d9aec4e46ac7a9d6ea5d2b3012924eb468fa\"" May 13 00:22:11.140923 systemd[1]: Started cri-containerd-375dc39cecf38418c6ad7a90bb93d9aec4e46ac7a9d6ea5d2b3012924eb468fa.scope - libcontainer container 375dc39cecf38418c6ad7a90bb93d9aec4e46ac7a9d6ea5d2b3012924eb468fa. May 13 00:22:11.141955 systemd[1]: Started cri-containerd-a01664c441a2f3bd254ea3f8d7c2ff8ee2c51c0c23d303c1838d39a4c358f1a8.scope - libcontainer container a01664c441a2f3bd254ea3f8d7c2ff8ee2c51c0c23d303c1838d39a4c358f1a8. May 13 00:22:11.142814 systemd[1]: Started cri-containerd-dcb019d3d964eabee49e37dc2efa269059dd9bdf2e79c3366aec3ee8956e1e6b.scope - libcontainer container dcb019d3d964eabee49e37dc2efa269059dd9bdf2e79c3366aec3ee8956e1e6b. May 13 00:22:11.175372 kubelet[2182]: W0513 00:22:11.175305 2182 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:11.175372 kubelet[2182]: E0513 00:22:11.175367 2182 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:11.193338 containerd[1446]: time="2025-05-13T00:22:11.193289935Z" level=info msg="StartContainer for \"a01664c441a2f3bd254ea3f8d7c2ff8ee2c51c0c23d303c1838d39a4c358f1a8\" returns successfully" May 13 00:22:11.193706 containerd[1446]: time="2025-05-13T00:22:11.193309043Z" level=info msg="StartContainer for \"dcb019d3d964eabee49e37dc2efa269059dd9bdf2e79c3366aec3ee8956e1e6b\" returns successfully" May 13 00:22:11.193936 containerd[1446]: time="2025-05-13T00:22:11.193315052Z" level=info msg="StartContainer for \"375dc39cecf38418c6ad7a90bb93d9aec4e46ac7a9d6ea5d2b3012924eb468fa\" returns successfully" May 13 00:22:11.243014 kubelet[2182]: W0513 00:22:11.242955 2182 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:11.243135 kubelet[2182]: E0513 00:22:11.243115 2182 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 00:22:11.276681 kubelet[2182]: E0513 00:22:11.276553 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" May 13 00:22:11.378455 kubelet[2182]: I0513 00:22:11.378420 2182 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 
00:22:11.378778 kubelet[2182]: E0513 00:22:11.378750 2182 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 13 00:22:11.914400 kubelet[2182]: E0513 00:22:11.913905 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:11.915733 kubelet[2182]: E0513 00:22:11.915715 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:11.920514 kubelet[2182]: E0513 00:22:11.920482 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:12.857736 kubelet[2182]: E0513 00:22:12.857683 2182 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 13 00:22:12.881485 kubelet[2182]: E0513 00:22:12.881418 2182 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:22:12.921001 kubelet[2182]: E0513 00:22:12.920977 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:12.981485 kubelet[2182]: I0513 00:22:12.980605 2182 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:22:12.987632 kubelet[2182]: I0513 00:22:12.987602 2182 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:22:13.860026 kubelet[2182]: I0513 00:22:13.859950 2182 apiserver.go:52] "Watching apiserver" May 13 00:22:13.870321 kubelet[2182]: I0513 00:22:13.870288 2182 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:22:14.508228 systemd[1]: Reloading requested from client PID 2458 ('systemctl') (unit session-7.scope)... May 13 00:22:14.508245 systemd[1]: Reloading... May 13 00:22:14.582753 zram_generator::config[2498]: No configuration found. May 13 00:22:14.665721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:22:14.730754 systemd[1]: Reloading finished in 222 ms. May 13 00:22:14.764528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:22:14.765240 kubelet[2182]: I0513 00:22:14.764786 2182 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:22:14.779526 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:22:14.779789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:22:14.789858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:22:14.880019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
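
[Editor's note] Every failure in this stretch shares one cause: nothing is listening on 10.0.0.71:6443 until the kube-apiserver static pod starts, so each list/watch, node registration, and lease call ends in "connect: connection refused", and the lease controller's retry interval doubles (400ms, 800ms, 1.6s above). A standalone sketch of a probe with that doubling backoff; the 7s ceiling is an assumption for illustration, not taken from kubelet source:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        interval := 400 * time.Millisecond
        for {
            conn, err := net.DialTimeout("tcp", "10.0.0.71:6443", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable")
                return
            }
            // Same failure mode as the log:
            // dial tcp 10.0.0.71:6443: connect: connection refused
            fmt.Printf("%v; retrying in %v\n", err, interval)
            time.Sleep(interval)
            interval *= 2
            if interval > 7*time.Second {
                interval = 7 * time.Second // assumed cap, illustration only
            }
        }
    }
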
May 13 00:22:14.884047 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:22:14.924083 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:22:14.924909 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:22:14.924909 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:22:14.924909 kubelet[2539]: I0513 00:22:14.924467 2539 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:22:14.928433 kubelet[2539]: I0513 00:22:14.928399 2539 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:22:14.928536 kubelet[2539]: I0513 00:22:14.928465 2539 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:22:14.928675 kubelet[2539]: I0513 00:22:14.928650 2539 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:22:14.930173 kubelet[2539]: I0513 00:22:14.930135 2539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:22:14.934660 kubelet[2539]: I0513 00:22:14.934521 2539 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:22:14.939689 kubelet[2539]: I0513 00:22:14.939648 2539 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:22:14.939905 kubelet[2539]: I0513 00:22:14.939873 2539 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:22:14.940084 kubelet[2539]: I0513 00:22:14.939908 2539 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:22:14.940159 kubelet[2539]: I0513 00:22:14.940090 2539 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:22:14.940159 kubelet[2539]: I0513 00:22:14.940100 2539 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:22:14.940159 kubelet[2539]: I0513 00:22:14.940143 2539 state_mem.go:36] "Initialized new in-memory state store" May 13 00:22:14.940266 kubelet[2539]: I0513 00:22:14.940253 2539 kubelet.go:400] "Attempting to sync node with API server" May 13 00:22:14.940308 kubelet[2539]: I0513 00:22:14.940272 2539 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:22:14.940308 kubelet[2539]: I0513 00:22:14.940318 2539 kubelet.go:312] "Adding apiserver pod source" May 13 00:22:14.940308 kubelet[2539]: I0513 00:22:14.940338 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:22:14.942802 kubelet[2539]: I0513 00:22:14.942778 2539 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:22:14.945643 kubelet[2539]: I0513 00:22:14.942954 2539 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:22:14.945643 kubelet[2539]: I0513 00:22:14.943291 2539 server.go:1264] "Started kubelet" May 13 00:22:14.945643 kubelet[2539]: I0513 00:22:14.943556 2539 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:22:14.945643 kubelet[2539]: I0513 00:22:14.943561 2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:22:14.945643 kubelet[2539]: I0513 00:22:14.943833 2539 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:22:14.945643 kubelet[2539]: I0513 00:22:14.944659 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:22:14.957923 kubelet[2539]: E0513 00:22:14.954290 2539 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:22:14.957923 kubelet[2539]: I0513 00:22:14.954343 2539 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:22:14.957923 kubelet[2539]: I0513 00:22:14.954449 2539 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:22:14.957923 kubelet[2539]: I0513 00:22:14.954589 2539 reconciler.go:26] "Reconciler: start to sync state" May 13 00:22:14.963225 kubelet[2539]: I0513 00:22:14.962929 2539 server.go:455] "Adding debug handlers to kubelet server" May 13 00:22:14.963225 kubelet[2539]: I0513 00:22:14.962970 2539 factory.go:221] Registration of the systemd container factory successfully May 13 00:22:14.963225 kubelet[2539]: I0513 00:22:14.963053 2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:22:14.964724 kubelet[2539]: E0513 00:22:14.964623 2539 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:22:14.965178 kubelet[2539]: I0513 00:22:14.965116 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:22:14.965298 kubelet[2539]: I0513 00:22:14.965268 2539 factory.go:221] Registration of the containerd container factory successfully May 13 00:22:14.966264 kubelet[2539]: I0513 00:22:14.966228 2539 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:22:14.966264 kubelet[2539]: I0513 00:22:14.966266 2539 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:22:14.966385 kubelet[2539]: I0513 00:22:14.966283 2539 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:22:14.966385 kubelet[2539]: E0513 00:22:14.966339 2539 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:22:14.995450 kubelet[2539]: I0513 00:22:14.995327 2539 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:22:14.995450 kubelet[2539]: I0513 00:22:14.995358 2539 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:22:14.995450 kubelet[2539]: I0513 00:22:14.995381 2539 state_mem.go:36] "Initialized new in-memory state store" May 13 00:22:14.995616 kubelet[2539]: I0513 00:22:14.995560 2539 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:22:14.995616 kubelet[2539]: I0513 00:22:14.995570 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:22:14.995616 kubelet[2539]: I0513 00:22:14.995588 2539 policy_none.go:49] "None policy: Start" May 13 00:22:14.996387 kubelet[2539]: I0513 00:22:14.996272 2539 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:22:14.996387 kubelet[2539]: I0513 00:22:14.996303 2539 state_mem.go:35] "Initializing new in-memory state store" May 13 00:22:14.996469 kubelet[2539]: I0513 00:22:14.996462 2539 state_mem.go:75] "Updated machine memory state" May 13 00:22:14.999961 kubelet[2539]: I0513 00:22:14.999941 2539 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:22:15.000304 kubelet[2539]: I0513 00:22:15.000170 2539 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:22:15.001285 kubelet[2539]: I0513 00:22:15.001270 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:22:15.059289 kubelet[2539]: I0513 00:22:15.058622 2539 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:22:15.064562 kubelet[2539]: I0513 00:22:15.064528 2539 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:22:15.064638 kubelet[2539]: I0513 00:22:15.064607 2539 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:22:15.066815 kubelet[2539]: I0513 00:22:15.066771 2539 topology_manager.go:215] "Topology Admit Handler" podUID="98af2381cb416b5fe8063daba5d01d7f" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:22:15.066885 kubelet[2539]: I0513 00:22:15.066864 2539 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:22:15.066914 kubelet[2539]: I0513 00:22:15.066902 2539 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:22:15.155974 kubelet[2539]: I0513 00:22:15.155928 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:22:15.155974 
kubelet[2539]: I0513 00:22:15.155970 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:15.156131 kubelet[2539]: I0513 00:22:15.155994 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:15.156131 kubelet[2539]: I0513 00:22:15.156009 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:15.156131 kubelet[2539]: I0513 00:22:15.156026 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:15.156131 kubelet[2539]: I0513 00:22:15.156043 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:22:15.156131 kubelet[2539]: I0513 00:22:15.156058 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98af2381cb416b5fe8063daba5d01d7f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"98af2381cb416b5fe8063daba5d01d7f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:22:15.156256 kubelet[2539]: I0513 00:22:15.156073 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98af2381cb416b5fe8063daba5d01d7f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"98af2381cb416b5fe8063daba5d01d7f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:22:15.156256 kubelet[2539]: I0513 00:22:15.156093 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98af2381cb416b5fe8063daba5d01d7f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"98af2381cb416b5fe8063daba5d01d7f\") " pod="kube-system/kube-apiserver-localhost" May 13 00:22:15.380982 kubelet[2539]: E0513 00:22:15.380260 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:15.380982 kubelet[2539]: E0513 00:22:15.380792 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:15.381189 kubelet[2539]: E0513 00:22:15.380982 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:15.941054 kubelet[2539]: I0513 00:22:15.941009 2539 apiserver.go:52] "Watching apiserver" May 13 00:22:15.954839 kubelet[2539]: I0513 00:22:15.954803 2539 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:22:15.981270 kubelet[2539]: E0513 00:22:15.981234 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:15.981461 kubelet[2539]: E0513 00:22:15.981433 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:15.990436 kubelet[2539]: E0513 00:22:15.990284 2539 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:22:15.990781 kubelet[2539]: E0513 00:22:15.990741 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:16.016459 kubelet[2539]: I0513 00:22:16.016391 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.016376065 podStartE2EDuration="1.016376065s" podCreationTimestamp="2025-05-13 00:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:16.016258378 +0000 UTC m=+1.128389197" watchObservedRunningTime="2025-05-13 00:22:16.016376065 +0000 UTC m=+1.128506844" May 13 00:22:16.038627 kubelet[2539]: I0513 00:22:16.038565 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.03854947 podStartE2EDuration="1.03854947s" podCreationTimestamp="2025-05-13 00:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:16.027101698 +0000 UTC m=+1.139232477" watchObservedRunningTime="2025-05-13 00:22:16.03854947 +0000 UTC m=+1.150680249" May 13 00:22:16.991172 kubelet[2539]: E0513 00:22:16.991132 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:16.991973 kubelet[2539]: E0513 00:22:16.991939 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:19.802365 sudo[1619]: pam_unix(sudo:session): session closed for user root May 13 00:22:19.804255 sshd[1616]: pam_unix(sshd:session): session closed for user core May 13 00:22:19.807931 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:43432.service: Deactivated successfully. May 13 00:22:19.810053 systemd[1]: session-7.scope: Deactivated successfully. 
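
[Editor's note] The dns.go:153 warning that recurs throughout this log fires because the host resolv.conf lists more upstreams than glibc-style resolvers honor; only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are dropped. A hedged sketch of that check — the limit of 3 is inferred from the warning text, not read from kubelet source:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Println("cannot read resolv.conf:", err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        const limit = 3 // resolver limit implied by the warning above
        if len(servers) > limit {
            fmt.Printf("nameserver limits exceeded; applying %s, omitting %s\n",
                strings.Join(servers[:limit], " "), strings.Join(servers[limit:], " "))
        }
    }
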
May 13 00:22:19.810341 systemd[1]: session-7.scope: Consumed 6.087s CPU time, 187.1M memory peak, 0B memory swap peak. May 13 00:22:19.811886 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit. May 13 00:22:19.813416 systemd-logind[1422]: Removed session 7. May 13 00:22:21.811195 kubelet[2539]: E0513 00:22:21.811144 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:21.828885 kubelet[2539]: I0513 00:22:21.828806 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.826296424 podStartE2EDuration="6.826296424s" podCreationTimestamp="2025-05-13 00:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:16.038676845 +0000 UTC m=+1.150807664" watchObservedRunningTime="2025-05-13 00:22:21.826296424 +0000 UTC m=+6.938427203" May 13 00:22:21.994305 kubelet[2539]: E0513 00:22:21.994248 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:23.838377 kubelet[2539]: E0513 00:22:23.838343 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:23.999342 kubelet[2539]: E0513 00:22:23.997556 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:26.894243 kubelet[2539]: E0513 00:22:26.894048 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:29.043865 kubelet[2539]: I0513 00:22:29.043813 2539 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:22:29.058813 containerd[1446]: time="2025-05-13T00:22:29.058587516Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:22:29.061374 kubelet[2539]: I0513 00:22:29.059631 2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:22:29.082831 kubelet[2539]: I0513 00:22:29.080931 2539 topology_manager.go:215] "Topology Admit Handler" podUID="704d75ad-5d47-4416-803a-2edbc03b81e9" podNamespace="kube-system" podName="kube-proxy-4x6bg" May 13 00:22:29.095942 systemd[1]: Created slice kubepods-besteffort-pod704d75ad_5d47_4416_803a_2edbc03b81e9.slice - libcontainer container kubepods-besteffort-pod704d75ad_5d47_4416_803a_2edbc03b81e9.slice. 
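
[Editor's note] When the node's PodCIDR arrives, the kubelet below pushes 192.168.0.0/24 to containerd over CRI ("Updating runtime config through cri with podcidr"). A small sketch validating such a CIDR with the standard library before handing it to a runtime:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        // Prints: pod network 192.168.0.0/24 (base IP 192.168.0.0), mask /24 of 32 bits
        fmt.Printf("pod network %v (base IP %v), mask /%d of %d bits\n", ipnet, ip, ones, bits)
    }
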
May 13 00:22:29.250727 kubelet[2539]: I0513 00:22:29.250662 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/704d75ad-5d47-4416-803a-2edbc03b81e9-xtables-lock\") pod \"kube-proxy-4x6bg\" (UID: \"704d75ad-5d47-4416-803a-2edbc03b81e9\") " pod="kube-system/kube-proxy-4x6bg" May 13 00:22:29.250727 kubelet[2539]: I0513 00:22:29.250727 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkk4q\" (UniqueName: \"kubernetes.io/projected/704d75ad-5d47-4416-803a-2edbc03b81e9-kube-api-access-wkk4q\") pod \"kube-proxy-4x6bg\" (UID: \"704d75ad-5d47-4416-803a-2edbc03b81e9\") " pod="kube-system/kube-proxy-4x6bg" May 13 00:22:29.250976 kubelet[2539]: I0513 00:22:29.250751 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/704d75ad-5d47-4416-803a-2edbc03b81e9-kube-proxy\") pod \"kube-proxy-4x6bg\" (UID: \"704d75ad-5d47-4416-803a-2edbc03b81e9\") " pod="kube-system/kube-proxy-4x6bg" May 13 00:22:29.250976 kubelet[2539]: I0513 00:22:29.250871 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/704d75ad-5d47-4416-803a-2edbc03b81e9-lib-modules\") pod \"kube-proxy-4x6bg\" (UID: \"704d75ad-5d47-4416-803a-2edbc03b81e9\") " pod="kube-system/kube-proxy-4x6bg" May 13 00:22:29.405227 kubelet[2539]: E0513 00:22:29.405134 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:29.410865 containerd[1446]: time="2025-05-13T00:22:29.410816479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x6bg,Uid:704d75ad-5d47-4416-803a-2edbc03b81e9,Namespace:kube-system,Attempt:0,}" May 13 00:22:29.600752 containerd[1446]: time="2025-05-13T00:22:29.600556706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:29.600752 containerd[1446]: time="2025-05-13T00:22:29.600616009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:29.601612 containerd[1446]: time="2025-05-13T00:22:29.600707966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:29.604289 containerd[1446]: time="2025-05-13T00:22:29.604183819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:29.637832 systemd[1]: Started cri-containerd-5764afd223529c4c7f7acc7019895dbe37b40e95d93b49bc9a958be81177cfc4.scope - libcontainer container 5764afd223529c4c7f7acc7019895dbe37b40e95d93b49bc9a958be81177cfc4. 
May 13 00:22:29.659048 containerd[1446]: time="2025-05-13T00:22:29.658938379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x6bg,Uid:704d75ad-5d47-4416-803a-2edbc03b81e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5764afd223529c4c7f7acc7019895dbe37b40e95d93b49bc9a958be81177cfc4\"" May 13 00:22:29.663388 kubelet[2539]: E0513 00:22:29.663358 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:29.666476 containerd[1446]: time="2025-05-13T00:22:29.666442785Z" level=info msg="CreateContainer within sandbox \"5764afd223529c4c7f7acc7019895dbe37b40e95d93b49bc9a958be81177cfc4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:22:29.685888 containerd[1446]: time="2025-05-13T00:22:29.685839650Z" level=info msg="CreateContainer within sandbox \"5764afd223529c4c7f7acc7019895dbe37b40e95d93b49bc9a958be81177cfc4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6327706e660eab0bd1165bd5932528af409eb99446b8125b8f7208282c044744\"" May 13 00:22:29.688375 containerd[1446]: time="2025-05-13T00:22:29.688347201Z" level=info msg="StartContainer for \"6327706e660eab0bd1165bd5932528af409eb99446b8125b8f7208282c044744\"" May 13 00:22:29.721838 systemd[1]: Started cri-containerd-6327706e660eab0bd1165bd5932528af409eb99446b8125b8f7208282c044744.scope - libcontainer container 6327706e660eab0bd1165bd5932528af409eb99446b8125b8f7208282c044744. May 13 00:22:29.730703 kubelet[2539]: I0513 00:22:29.728457 2539 topology_manager.go:215] "Topology Admit Handler" podUID="2b9d757c-4b78-420d-a28a-1c87847a9fee" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-77gdk" May 13 00:22:29.741320 systemd[1]: Created slice kubepods-besteffort-pod2b9d757c_4b78_420d_a28a_1c87847a9fee.slice - libcontainer container kubepods-besteffort-pod2b9d757c_4b78_420d_a28a_1c87847a9fee.slice. 
May 13 00:22:29.754623 kubelet[2539]: I0513 00:22:29.754569 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4ww\" (UniqueName: \"kubernetes.io/projected/2b9d757c-4b78-420d-a28a-1c87847a9fee-kube-api-access-lx4ww\") pod \"tigera-operator-797db67f8-77gdk\" (UID: \"2b9d757c-4b78-420d-a28a-1c87847a9fee\") " pod="tigera-operator/tigera-operator-797db67f8-77gdk" May 13 00:22:29.754623 kubelet[2539]: I0513 00:22:29.754623 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b9d757c-4b78-420d-a28a-1c87847a9fee-var-lib-calico\") pod \"tigera-operator-797db67f8-77gdk\" (UID: \"2b9d757c-4b78-420d-a28a-1c87847a9fee\") " pod="tigera-operator/tigera-operator-797db67f8-77gdk" May 13 00:22:29.761779 containerd[1446]: time="2025-05-13T00:22:29.761646690Z" level=info msg="StartContainer for \"6327706e660eab0bd1165bd5932528af409eb99446b8125b8f7208282c044744\" returns successfully" May 13 00:22:30.008013 kubelet[2539]: E0513 00:22:30.007979 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:30.017022 kubelet[2539]: I0513 00:22:30.016967 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4x6bg" podStartSLOduration=1.016951984 podStartE2EDuration="1.016951984s" podCreationTimestamp="2025-05-13 00:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:30.016765314 +0000 UTC m=+15.128896133" watchObservedRunningTime="2025-05-13 00:22:30.016951984 +0000 UTC m=+15.129082763" May 13 00:22:30.046441 containerd[1446]: time="2025-05-13T00:22:30.046373705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-77gdk,Uid:2b9d757c-4b78-420d-a28a-1c87847a9fee,Namespace:tigera-operator,Attempt:0,}" May 13 00:22:30.071470 containerd[1446]: time="2025-05-13T00:22:30.071371126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:30.071470 containerd[1446]: time="2025-05-13T00:22:30.071432109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:30.071470 containerd[1446]: time="2025-05-13T00:22:30.071447995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:30.071869 containerd[1446]: time="2025-05-13T00:22:30.071536588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:30.088832 systemd[1]: Started cri-containerd-d4d7a19a740652fe79430acb5184097c41bca456fc08f4d4c15be079cee9d63c.scope - libcontainer container d4d7a19a740652fe79430acb5184097c41bca456fc08f4d4c15be079cee9d63c. 
May 13 00:22:30.121291 containerd[1446]: time="2025-05-13T00:22:30.120973380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-77gdk,Uid:2b9d757c-4b78-420d-a28a-1c87847a9fee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d4d7a19a740652fe79430acb5184097c41bca456fc08f4d4c15be079cee9d63c\"" May 13 00:22:30.123745 containerd[1446]: time="2025-05-13T00:22:30.123708086Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 00:22:31.125087 update_engine[1426]: I20250513 00:22:31.125012 1426 update_attempter.cc:509] Updating boot flags... May 13 00:22:31.159689 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2881) May 13 00:22:31.204791 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2788) May 13 00:22:31.236753 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2788) May 13 00:22:31.279715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277404271.mount: Deactivated successfully. May 13 00:22:31.566735 containerd[1446]: time="2025-05-13T00:22:31.566681171Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:31.567479 containerd[1446]: time="2025-05-13T00:22:31.567435640Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 13 00:22:31.568108 containerd[1446]: time="2025-05-13T00:22:31.568073347Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:31.570348 containerd[1446]: time="2025-05-13T00:22:31.570310545Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:31.571206 containerd[1446]: time="2025-05-13T00:22:31.571169011Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.447420109s" May 13 00:22:31.571248 containerd[1446]: time="2025-05-13T00:22:31.571206144Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 13 00:22:31.584015 containerd[1446]: time="2025-05-13T00:22:31.583866618Z" level=info msg="CreateContainer within sandbox \"d4d7a19a740652fe79430acb5184097c41bca456fc08f4d4c15be079cee9d63c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 00:22:31.601997 containerd[1446]: time="2025-05-13T00:22:31.601735270Z" level=info msg="CreateContainer within sandbox \"d4d7a19a740652fe79430acb5184097c41bca456fc08f4d4c15be079cee9d63c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"27f40bdd8ef3ce0d534ae7ec53325c043fd1e2d57bdacc5f333b373501b309cd\"" May 13 00:22:31.603698 containerd[1446]: time="2025-05-13T00:22:31.602326441Z" level=info msg="StartContainer for \"27f40bdd8ef3ce0d534ae7ec53325c043fd1e2d57bdacc5f333b373501b309cd\"" May 13 00:22:31.639968 systemd[1]: Started 
cri-containerd-27f40bdd8ef3ce0d534ae7ec53325c043fd1e2d57bdacc5f333b373501b309cd.scope - libcontainer container 27f40bdd8ef3ce0d534ae7ec53325c043fd1e2d57bdacc5f333b373501b309cd. May 13 00:22:31.695978 containerd[1446]: time="2025-05-13T00:22:31.695932618Z" level=info msg="StartContainer for \"27f40bdd8ef3ce0d534ae7ec53325c043fd1e2d57bdacc5f333b373501b309cd\" returns successfully" May 13 00:22:35.059597 kubelet[2539]: I0513 00:22:35.059535 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-77gdk" podStartSLOduration=4.603044199 podStartE2EDuration="6.059516488s" podCreationTimestamp="2025-05-13 00:22:29 +0000 UTC" firstStartedPulling="2025-05-13 00:22:30.122088958 +0000 UTC m=+15.234219737" lastFinishedPulling="2025-05-13 00:22:31.578561247 +0000 UTC m=+16.690692026" observedRunningTime="2025-05-13 00:22:32.036414485 +0000 UTC m=+17.148545264" watchObservedRunningTime="2025-05-13 00:22:35.059516488 +0000 UTC m=+20.171647227" May 13 00:22:35.061786 kubelet[2539]: I0513 00:22:35.060369 2539 topology_manager.go:215] "Topology Admit Handler" podUID="a297cd12-7aa3-4d71-8952-223bcd91c1ec" podNamespace="calico-system" podName="calico-typha-5fb48f5bf9-mjgw4" May 13 00:22:35.072827 systemd[1]: Created slice kubepods-besteffort-poda297cd12_7aa3_4d71_8952_223bcd91c1ec.slice - libcontainer container kubepods-besteffort-poda297cd12_7aa3_4d71_8952_223bcd91c1ec.slice. May 13 00:22:35.120715 kubelet[2539]: I0513 00:22:35.120658 2539 topology_manager.go:215] "Topology Admit Handler" podUID="72b68459-1919-4966-b93b-32e458d8cf96" podNamespace="calico-system" podName="calico-node-qpwwj" May 13 00:22:35.131717 systemd[1]: Created slice kubepods-besteffort-pod72b68459_1919_4966_b93b_32e458d8cf96.slice - libcontainer container kubepods-besteffort-pod72b68459_1919_4966_b93b_32e458d8cf96.slice. 
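
[Editor's note] The startup-latency entry above for tigera-operator decomposes consistently: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling), as the logged numbers confirm. Reproducing the arithmetic with the timestamps exactly as logged (the layout string is Go's default time.Time format used in these fields):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-05-13 00:22:29 +0000 UTC")
        watched := parse("2025-05-13 00:22:35.059516488 +0000 UTC")
        pullStart := parse("2025-05-13 00:22:30.122088958 +0000 UTC")
        pullEnd := parse("2025-05-13 00:22:31.578561247 +0000 UTC")

        e2e := watched.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart)
        fmt.Println("podStartE2EDuration:", e2e) // 6.059516488s
        fmt.Println("podStartSLOduration:", slo) // 4.603044199s
    }
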
May 13 00:22:35.188612 kubelet[2539]: I0513 00:22:35.188452 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a297cd12-7aa3-4d71-8952-223bcd91c1ec-tigera-ca-bundle\") pod \"calico-typha-5fb48f5bf9-mjgw4\" (UID: \"a297cd12-7aa3-4d71-8952-223bcd91c1ec\") " pod="calico-system/calico-typha-5fb48f5bf9-mjgw4" May 13 00:22:35.188612 kubelet[2539]: I0513 00:22:35.188496 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a297cd12-7aa3-4d71-8952-223bcd91c1ec-typha-certs\") pod \"calico-typha-5fb48f5bf9-mjgw4\" (UID: \"a297cd12-7aa3-4d71-8952-223bcd91c1ec\") " pod="calico-system/calico-typha-5fb48f5bf9-mjgw4" May 13 00:22:35.188612 kubelet[2539]: I0513 00:22:35.188531 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lklqb\" (UniqueName: \"kubernetes.io/projected/a297cd12-7aa3-4d71-8952-223bcd91c1ec-kube-api-access-lklqb\") pod \"calico-typha-5fb48f5bf9-mjgw4\" (UID: \"a297cd12-7aa3-4d71-8952-223bcd91c1ec\") " pod="calico-system/calico-typha-5fb48f5bf9-mjgw4" May 13 00:22:35.245057 kubelet[2539]: I0513 00:22:35.244986 2539 topology_manager.go:215] "Topology Admit Handler" podUID="8de2152a-6cd4-4599-a610-aac788d746cf" podNamespace="calico-system" podName="csi-node-driver-qpdwm" May 13 00:22:35.245321 kubelet[2539]: E0513 00:22:35.245284 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdwm" podUID="8de2152a-6cd4-4599-a610-aac788d746cf" May 13 00:22:35.289294 kubelet[2539]: I0513 00:22:35.289253 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/72b68459-1919-4966-b93b-32e458d8cf96-node-certs\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.289294 kubelet[2539]: I0513 00:22:35.289295 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-var-run-calico\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.289294 kubelet[2539]: I0513 00:22:35.289317 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-var-lib-calico\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290201 kubelet[2539]: I0513 00:22:35.289340 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-cni-log-dir\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290201 kubelet[2539]: I0513 00:22:35.289358 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-xtables-lock\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290201 kubelet[2539]: I0513 00:22:35.289373 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-policysync\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290201 kubelet[2539]: I0513 00:22:35.289389 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72b68459-1919-4966-b93b-32e458d8cf96-tigera-ca-bundle\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290201 kubelet[2539]: I0513 00:22:35.289456 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-flexvol-driver-host\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290329 kubelet[2539]: I0513 00:22:35.289524 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-lib-modules\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290329 kubelet[2539]: I0513 00:22:35.289944 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-cni-bin-dir\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290329 kubelet[2539]: I0513 00:22:35.289966 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfjj\" (UniqueName: \"kubernetes.io/projected/72b68459-1919-4966-b93b-32e458d8cf96-kube-api-access-trfjj\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.290329 kubelet[2539]: I0513 00:22:35.289995 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/72b68459-1919-4966-b93b-32e458d8cf96-cni-net-dir\") pod \"calico-node-qpwwj\" (UID: \"72b68459-1919-4966-b93b-32e458d8cf96\") " pod="calico-system/calico-node-qpwwj" May 13 00:22:35.378381 kubelet[2539]: E0513 00:22:35.378271 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:35.379470 containerd[1446]: time="2025-05-13T00:22:35.378928763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb48f5bf9-mjgw4,Uid:a297cd12-7aa3-4d71-8952-223bcd91c1ec,Namespace:calico-system,Attempt:0,}" May 13 00:22:35.391187 kubelet[2539]: I0513 00:22:35.391132 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8de2152a-6cd4-4599-a610-aac788d746cf-registration-dir\") pod \"csi-node-driver-qpdwm\" (UID: \"8de2152a-6cd4-4599-a610-aac788d746cf\") " pod="calico-system/csi-node-driver-qpdwm" May 13 00:22:35.391187 kubelet[2539]: I0513 00:22:35.391178 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8de2152a-6cd4-4599-a610-aac788d746cf-varrun\") pod \"csi-node-driver-qpdwm\" (UID: \"8de2152a-6cd4-4599-a610-aac788d746cf\") " pod="calico-system/csi-node-driver-qpdwm" May 13 00:22:35.391320 kubelet[2539]: I0513 00:22:35.391279 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8de2152a-6cd4-4599-a610-aac788d746cf-socket-dir\") pod \"csi-node-driver-qpdwm\" (UID: \"8de2152a-6cd4-4599-a610-aac788d746cf\") " pod="calico-system/csi-node-driver-qpdwm" May 13 00:22:35.391320 kubelet[2539]: I0513 00:22:35.391295 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpnb9\" (UniqueName: \"kubernetes.io/projected/8de2152a-6cd4-4599-a610-aac788d746cf-kube-api-access-rpnb9\") pod \"csi-node-driver-qpdwm\" (UID: \"8de2152a-6cd4-4599-a610-aac788d746cf\") " pod="calico-system/csi-node-driver-qpdwm" May 13 00:22:35.391397 kubelet[2539]: I0513 00:22:35.391335 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8de2152a-6cd4-4599-a610-aac788d746cf-kubelet-dir\") pod \"csi-node-driver-qpdwm\" (UID: \"8de2152a-6cd4-4599-a610-aac788d746cf\") " pod="calico-system/csi-node-driver-qpdwm" May 13 00:22:35.399290 kubelet[2539]: E0513 00:22:35.397594 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.399290 kubelet[2539]: W0513 00:22:35.397616 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.399290 kubelet[2539]: E0513 00:22:35.397638 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.403558 containerd[1446]: time="2025-05-13T00:22:35.403429416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:35.403558 containerd[1446]: time="2025-05-13T00:22:35.403495876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:35.403558 containerd[1446]: time="2025-05-13T00:22:35.403511040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:35.403785 containerd[1446]: time="2025-05-13T00:22:35.403603587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:35.411158 kubelet[2539]: E0513 00:22:35.411078 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.411158 kubelet[2539]: W0513 00:22:35.411101 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.411158 kubelet[2539]: E0513 00:22:35.411121 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.427845 systemd[1]: Started cri-containerd-80f95fe3490b4762c83e1f9a25d88d981575fb1075c0f1cf162292e2034dff08.scope - libcontainer container 80f95fe3490b4762c83e1f9a25d88d981575fb1075c0f1cf162292e2034dff08. May 13 00:22:35.434780 kubelet[2539]: E0513 00:22:35.434605 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:35.435349 containerd[1446]: time="2025-05-13T00:22:35.435307589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qpwwj,Uid:72b68459-1919-4966-b93b-32e458d8cf96,Namespace:calico-system,Attempt:0,}" May 13 00:22:35.458433 containerd[1446]: time="2025-05-13T00:22:35.458309204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:35.458433 containerd[1446]: time="2025-05-13T00:22:35.458385986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:35.458433 containerd[1446]: time="2025-05-13T00:22:35.458407032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:35.458753 containerd[1446]: time="2025-05-13T00:22:35.458496898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:35.459706 containerd[1446]: time="2025-05-13T00:22:35.459228553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb48f5bf9-mjgw4,Uid:a297cd12-7aa3-4d71-8952-223bcd91c1ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"80f95fe3490b4762c83e1f9a25d88d981575fb1075c0f1cf162292e2034dff08\"" May 13 00:22:35.460464 kubelet[2539]: E0513 00:22:35.460427 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:35.463714 containerd[1446]: time="2025-05-13T00:22:35.463488360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 00:22:35.484851 systemd[1]: Started cri-containerd-468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838.scope - libcontainer container 468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838. 
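The burst of driver-call.go and plugins.go errors above, and the much larger burst below, is the kubelet's FlexVolume dynamic probing: it sees the nodeagent~uds driver directory (backed by the flexvol-driver-host host path mounted into calico-node earlier in this log), execs the uds binary with the single argument init, and parses stdout as JSON. The binary has not been installed yet, so the exec fails ("executable file not found in $PATH") and the empty output yields "unexpected end of JSON input"; the noise stops once calico-node's flexvol-driver init container (the pod2daemon-flexvol image pulled further down) drops the executable into place. A minimal sketch of the contract being probed, with the struct shape and the attach capability assumed from the FlexVolume convention rather than taken from this log:

```go
// Minimal sketch of a FlexVolume driver satisfying the probe above; the
// kubelet execs <plugin-dir>/nodeagent~uds/uds with the argument "init" and
// parses stdout as JSON, which is why a missing binary produces both errors
// seen in this log.
package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON shape the FlexVolume API expects on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	cmd := ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	enc := json.NewEncoder(os.Stdout)
	if cmd == "init" {
		// Calico's uds driver only hands pods a Unix-domain socket, so
		// advertising attach=false is assumed here from the upstream
		// pod2daemon driver, not from anything in this log.
		enc.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	enc.Encode(driverStatus{Status: "Not supported", Message: cmd})
	os.Exit(1)
}
```

Any executable that answers init with {"status":"Success"} on stdout would satisfy the probe and silence these errors.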
May 13 00:22:35.492183 kubelet[2539]: E0513 00:22:35.492159 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.492498 kubelet[2539]: W0513 00:22:35.492321 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.492498 kubelet[2539]: E0513 00:22:35.492347 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.492731 kubelet[2539]: E0513 00:22:35.492702 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.492731 kubelet[2539]: W0513 00:22:35.492717 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.492914 kubelet[2539]: E0513 00:22:35.492788 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.493351 kubelet[2539]: E0513 00:22:35.493235 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.493351 kubelet[2539]: W0513 00:22:35.493251 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.493351 kubelet[2539]: E0513 00:22:35.493266 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.493602 kubelet[2539]: E0513 00:22:35.493562 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.493602 kubelet[2539]: W0513 00:22:35.493575 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.493803 kubelet[2539]: E0513 00:22:35.493680 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.494218 kubelet[2539]: E0513 00:22:35.494197 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.494439 kubelet[2539]: W0513 00:22:35.494299 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.494439 kubelet[2539]: E0513 00:22:35.494323 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:35.494677 kubelet[2539]: E0513 00:22:35.494652 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.494784 kubelet[2539]: W0513 00:22:35.494768 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.494909 kubelet[2539]: E0513 00:22:35.494851 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.495230 kubelet[2539]: E0513 00:22:35.495214 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.495380 kubelet[2539]: W0513 00:22:35.495308 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.495380 kubelet[2539]: E0513 00:22:35.495333 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.495890 kubelet[2539]: E0513 00:22:35.495715 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.495890 kubelet[2539]: W0513 00:22:35.495730 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.495890 kubelet[2539]: E0513 00:22:35.495743 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.496185 kubelet[2539]: E0513 00:22:35.496158 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.496276 kubelet[2539]: W0513 00:22:35.496262 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.496472 kubelet[2539]: E0513 00:22:35.496401 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.496711 kubelet[2539]: E0513 00:22:35.496694 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.496868 kubelet[2539]: W0513 00:22:35.496796 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.496913 kubelet[2539]: E0513 00:22:35.496876 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:35.497791 kubelet[2539]: E0513 00:22:35.497694 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.497791 kubelet[2539]: W0513 00:22:35.497710 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.497791 kubelet[2539]: E0513 00:22:35.497787 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.498251 kubelet[2539]: E0513 00:22:35.498134 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.498251 kubelet[2539]: W0513 00:22:35.498150 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.498251 kubelet[2539]: E0513 00:22:35.498210 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.498535 kubelet[2539]: E0513 00:22:35.498349 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.498535 kubelet[2539]: W0513 00:22:35.498360 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.498535 kubelet[2539]: E0513 00:22:35.498439 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.499241 kubelet[2539]: E0513 00:22:35.498861 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.499241 kubelet[2539]: W0513 00:22:35.498878 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.499241 kubelet[2539]: E0513 00:22:35.498937 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.499445 kubelet[2539]: E0513 00:22:35.499429 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.499674 kubelet[2539]: W0513 00:22:35.499544 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.499674 kubelet[2539]: E0513 00:22:35.499622 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:35.501059 kubelet[2539]: E0513 00:22:35.500858 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.501444 kubelet[2539]: W0513 00:22:35.501329 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.501444 kubelet[2539]: E0513 00:22:35.501417 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.501799 kubelet[2539]: E0513 00:22:35.501634 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.501799 kubelet[2539]: W0513 00:22:35.501648 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.501799 kubelet[2539]: E0513 00:22:35.501681 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.502158 kubelet[2539]: E0513 00:22:35.502021 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.502158 kubelet[2539]: W0513 00:22:35.502038 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.502158 kubelet[2539]: E0513 00:22:35.502127 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.502489 kubelet[2539]: E0513 00:22:35.502390 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.502489 kubelet[2539]: W0513 00:22:35.502404 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.502489 kubelet[2539]: E0513 00:22:35.502454 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.502661 kubelet[2539]: E0513 00:22:35.502646 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.502786 kubelet[2539]: W0513 00:22:35.502762 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.502891 kubelet[2539]: E0513 00:22:35.502836 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:35.503479 kubelet[2539]: E0513 00:22:35.503377 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.503479 kubelet[2539]: W0513 00:22:35.503395 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.503479 kubelet[2539]: E0513 00:22:35.503456 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.503831 kubelet[2539]: E0513 00:22:35.503728 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.503831 kubelet[2539]: W0513 00:22:35.503741 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.503831 kubelet[2539]: E0513 00:22:35.503826 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.504179 kubelet[2539]: E0513 00:22:35.504085 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.504179 kubelet[2539]: W0513 00:22:35.504099 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.504179 kubelet[2539]: E0513 00:22:35.504172 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.504578 kubelet[2539]: E0513 00:22:35.504459 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.504578 kubelet[2539]: W0513 00:22:35.504472 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.504578 kubelet[2539]: E0513 00:22:35.504544 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.505043 kubelet[2539]: E0513 00:22:35.504886 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.505043 kubelet[2539]: W0513 00:22:35.504903 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.505043 kubelet[2539]: E0513 00:22:35.504917 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:35.515982 kubelet[2539]: E0513 00:22:35.515960 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:35.516227 kubelet[2539]: W0513 00:22:35.516109 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:35.516227 kubelet[2539]: E0513 00:22:35.516134 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:35.516871 containerd[1446]: time="2025-05-13T00:22:35.516835698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qpwwj,Uid:72b68459-1919-4966-b93b-32e458d8cf96,Namespace:calico-system,Attempt:0,} returns sandbox id \"468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838\"" May 13 00:22:35.519122 kubelet[2539]: E0513 00:22:35.518688 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:36.805915 containerd[1446]: time="2025-05-13T00:22:36.805873310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:36.807209 containerd[1446]: time="2025-05-13T00:22:36.807182795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 00:22:36.807961 containerd[1446]: time="2025-05-13T00:22:36.807934285Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:36.809996 containerd[1446]: time="2025-05-13T00:22:36.809722464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:36.810766 containerd[1446]: time="2025-05-13T00:22:36.810733187Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.347163483s" May 13 00:22:36.810820 containerd[1446]: time="2025-05-13T00:22:36.810764475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 00:22:36.812375 containerd[1446]: time="2025-05-13T00:22:36.812173029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:22:36.826885 containerd[1446]: time="2025-05-13T00:22:36.826853008Z" level=info msg="CreateContainer within sandbox \"80f95fe3490b4762c83e1f9a25d88d981575fb1075c0f1cf162292e2034dff08\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 00:22:36.850406 containerd[1446]: time="2025-05-13T00:22:36.850283550Z" level=info msg="CreateContainer within sandbox \"80f95fe3490b4762c83e1f9a25d88d981575fb1075c0f1cf162292e2034dff08\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fb1d0127c5af997b02f1d385763c718e5ef08fb44bab85864ee9438d1f4bd7af\"" May 13 00:22:36.851611 containerd[1446]: time="2025-05-13T00:22:36.851052605Z" level=info msg="StartContainer for \"fb1d0127c5af997b02f1d385763c718e5ef08fb44bab85864ee9438d1f4bd7af\"" May 13 00:22:36.902895 systemd[1]: Started cri-containerd-fb1d0127c5af997b02f1d385763c718e5ef08fb44bab85864ee9438d1f4bd7af.scope - libcontainer container fb1d0127c5af997b02f1d385763c718e5ef08fb44bab85864ee9438d1f4bd7af. May 13 00:22:36.938528 containerd[1446]: time="2025-05-13T00:22:36.938477737Z" level=info msg="StartContainer for \"fb1d0127c5af997b02f1d385763c718e5ef08fb44bab85864ee9438d1f4bd7af\" returns successfully" May 13 00:22:36.969236 kubelet[2539]: E0513 00:22:36.968723 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdwm" podUID="8de2152a-6cd4-4599-a610-aac788d746cf" May 13 00:22:37.044616 kubelet[2539]: E0513 00:22:37.044585 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:37.058139 kubelet[2539]: I0513 00:22:37.058002 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fb48f5bf9-mjgw4" podStartSLOduration=0.70886512 podStartE2EDuration="2.057986404s" podCreationTimestamp="2025-05-13 00:22:35 +0000 UTC" firstStartedPulling="2025-05-13 00:22:35.462535361 +0000 UTC m=+20.574666140" lastFinishedPulling="2025-05-13 00:22:36.811656645 +0000 UTC m=+21.923787424" observedRunningTime="2025-05-13 00:22:37.057682043 +0000 UTC m=+22.169812902" watchObservedRunningTime="2025-05-13 00:22:37.057986404 +0000 UTC m=+22.170117183" May 13 00:22:37.104086 kubelet[2539]: E0513 00:22:37.104042 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.104086 kubelet[2539]: W0513 00:22:37.104069 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.104086 kubelet[2539]: E0513 00:22:37.104091 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.104348 kubelet[2539]: E0513 00:22:37.104322 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.104348 kubelet[2539]: W0513 00:22:37.104338 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.104413 kubelet[2539]: E0513 00:22:37.104351 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:37.104540 kubelet[2539]: E0513 00:22:37.104520 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.104540 kubelet[2539]: W0513 00:22:37.104534 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.104592 kubelet[2539]: E0513 00:22:37.104543 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.104724 kubelet[2539]: E0513 00:22:37.104712 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.104758 kubelet[2539]: W0513 00:22:37.104724 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.104758 kubelet[2539]: E0513 00:22:37.104732 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.104910 kubelet[2539]: E0513 00:22:37.104890 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.104910 kubelet[2539]: W0513 00:22:37.104903 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.104910 kubelet[2539]: E0513 00:22:37.104910 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.105062 kubelet[2539]: E0513 00:22:37.105051 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.105086 kubelet[2539]: W0513 00:22:37.105062 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.105086 kubelet[2539]: E0513 00:22:37.105071 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.105284 kubelet[2539]: E0513 00:22:37.105271 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.105284 kubelet[2539]: W0513 00:22:37.105283 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.105336 kubelet[2539]: E0513 00:22:37.105291 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:37.105485 kubelet[2539]: E0513 00:22:37.105474 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.105485 kubelet[2539]: W0513 00:22:37.105485 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.105538 kubelet[2539]: E0513 00:22:37.105493 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.105878 kubelet[2539]: E0513 00:22:37.105847 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.105928 kubelet[2539]: W0513 00:22:37.105861 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.105928 kubelet[2539]: E0513 00:22:37.105900 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.106094 kubelet[2539]: E0513 00:22:37.106081 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.106094 kubelet[2539]: W0513 00:22:37.106093 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.106143 kubelet[2539]: E0513 00:22:37.106100 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.106262 kubelet[2539]: E0513 00:22:37.106250 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.106262 kubelet[2539]: W0513 00:22:37.106261 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.106310 kubelet[2539]: E0513 00:22:37.106269 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.106441 kubelet[2539]: E0513 00:22:37.106429 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.106468 kubelet[2539]: W0513 00:22:37.106441 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.106468 kubelet[2539]: E0513 00:22:37.106449 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:37.106637 kubelet[2539]: E0513 00:22:37.106626 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.106671 kubelet[2539]: W0513 00:22:37.106637 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.106671 kubelet[2539]: E0513 00:22:37.106644 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.106854 kubelet[2539]: E0513 00:22:37.106842 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.106854 kubelet[2539]: W0513 00:22:37.106852 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.106905 kubelet[2539]: E0513 00:22:37.106860 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.107033 kubelet[2539]: E0513 00:22:37.107022 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.107033 kubelet[2539]: W0513 00:22:37.107031 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.107083 kubelet[2539]: E0513 00:22:37.107039 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.107324 kubelet[2539]: E0513 00:22:37.107307 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.107324 kubelet[2539]: W0513 00:22:37.107320 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.107395 kubelet[2539]: E0513 00:22:37.107328 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.107541 kubelet[2539]: E0513 00:22:37.107530 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.107541 kubelet[2539]: W0513 00:22:37.107541 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.107598 kubelet[2539]: E0513 00:22:37.107556 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:37.108735 kubelet[2539]: E0513 00:22:37.107764 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.108787 kubelet[2539]: W0513 00:22:37.108750 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.108787 kubelet[2539]: E0513 00:22:37.108779 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.109037 kubelet[2539]: E0513 00:22:37.109021 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.109037 kubelet[2539]: W0513 00:22:37.109035 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.109105 kubelet[2539]: E0513 00:22:37.109089 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.109257 kubelet[2539]: E0513 00:22:37.109242 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.109257 kubelet[2539]: W0513 00:22:37.109254 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.109375 kubelet[2539]: E0513 00:22:37.109338 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.109423 kubelet[2539]: E0513 00:22:37.109416 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.109500 kubelet[2539]: W0513 00:22:37.109425 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.109500 kubelet[2539]: E0513 00:22:37.109454 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.109655 kubelet[2539]: E0513 00:22:37.109637 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.109734 kubelet[2539]: W0513 00:22:37.109655 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.109759 kubelet[2539]: E0513 00:22:37.109750 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:37.110024 kubelet[2539]: E0513 00:22:37.109960 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.110024 kubelet[2539]: W0513 00:22:37.110024 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.110091 kubelet[2539]: E0513 00:22:37.110040 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.110341 kubelet[2539]: E0513 00:22:37.110321 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.110379 kubelet[2539]: W0513 00:22:37.110338 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.110379 kubelet[2539]: E0513 00:22:37.110369 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.110761 kubelet[2539]: E0513 00:22:37.110746 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.110761 kubelet[2539]: W0513 00:22:37.110759 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.110834 kubelet[2539]: E0513 00:22:37.110775 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.111078 kubelet[2539]: E0513 00:22:37.111065 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.111078 kubelet[2539]: W0513 00:22:37.111077 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.111140 kubelet[2539]: E0513 00:22:37.111126 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.111314 kubelet[2539]: E0513 00:22:37.111301 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.111348 kubelet[2539]: W0513 00:22:37.111317 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.111424 kubelet[2539]: E0513 00:22:37.111395 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:37.111507 kubelet[2539]: E0513 00:22:37.111492 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.111507 kubelet[2539]: W0513 00:22:37.111506 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.111565 kubelet[2539]: E0513 00:22:37.111521 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.111828 kubelet[2539]: E0513 00:22:37.111809 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.111828 kubelet[2539]: W0513 00:22:37.111827 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.111907 kubelet[2539]: E0513 00:22:37.111858 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.112280 kubelet[2539]: E0513 00:22:37.112146 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.112280 kubelet[2539]: W0513 00:22:37.112161 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.112280 kubelet[2539]: E0513 00:22:37.112173 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.112456 kubelet[2539]: E0513 00:22:37.112442 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.112518 kubelet[2539]: W0513 00:22:37.112506 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.112583 kubelet[2539]: E0513 00:22:37.112571 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.112865 kubelet[2539]: E0513 00:22:37.112818 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.112865 kubelet[2539]: W0513 00:22:37.112851 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.112960 kubelet[2539]: E0513 00:22:37.112940 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:22:37.113206 kubelet[2539]: E0513 00:22:37.113179 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:22:37.113206 kubelet[2539]: W0513 00:22:37.113202 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:22:37.113266 kubelet[2539]: E0513 00:22:37.113213 2539 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:22:37.968518 containerd[1446]: time="2025-05-13T00:22:37.968468882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:37.969515 containerd[1446]: time="2025-05-13T00:22:37.969011187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 00:22:37.970699 containerd[1446]: time="2025-05-13T00:22:37.969946556Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:37.971985 containerd[1446]: time="2025-05-13T00:22:37.971952651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:37.973492 containerd[1446]: time="2025-05-13T00:22:37.973348183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.161140664s" May 13 00:22:37.973492 containerd[1446]: time="2025-05-13T00:22:37.973385873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 00:22:37.975304 containerd[1446]: time="2025-05-13T00:22:37.975198556Z" level=info msg="CreateContainer within sandbox \"468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:22:37.988952 containerd[1446]: time="2025-05-13T00:22:37.988895687Z" level=info msg="CreateContainer within sandbox \"468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac\"" May 13 00:22:37.989476 containerd[1446]: time="2025-05-13T00:22:37.989439512Z" level=info msg="StartContainer for \"7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac\"" May 13 00:22:38.025844 systemd[1]: Started cri-containerd-7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac.scope - libcontainer container 7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac. 
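The recurring dns.go:153 "Nameserver limits exceeded" warnings throughout this log come from the kubelet clamping the node's resolver list when it builds a pod's DNS configuration: only the first three nameservers survive (the glibc resolver limit), so everything after 1.1.1.1, 1.0.0.1 and 8.8.8.8 is dropped. A rough model of that clamping, with the constant and function names invented for illustration:

```go
// Rough model of why kubelet logs "Nameserver limits exceeded": pod
// resolv.conf generation keeps at most three nameservers and drops the rest.
package main

import "fmt"

const maxNameservers = 3 // glibc's resolver honours only the first three

func clampNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	applied, truncated := clampNameservers(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	if truncated {
		fmt.Println("Nameserver limits were exceeded; applied:", applied)
	}
}
```

The warning is cosmetic as long as the three surviving servers suffice; trimming the host's /etc/resolv.conf to three entries makes it go away.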
May 13 00:22:38.052092 kubelet[2539]: I0513 00:22:38.052045 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:22:38.053816 kubelet[2539]: E0513 00:22:38.052719 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:38.065714 containerd[1446]: time="2025-05-13T00:22:38.063793390Z" level=info msg="StartContainer for \"7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac\" returns successfully" May 13 00:22:38.084236 systemd[1]: cri-containerd-7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac.scope: Deactivated successfully. May 13 00:22:38.289614 containerd[1446]: time="2025-05-13T00:22:38.289545635Z" level=info msg="shim disconnected" id=7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac namespace=k8s.io May 13 00:22:38.289614 containerd[1446]: time="2025-05-13T00:22:38.289610452Z" level=warning msg="cleaning up after shim disconnected" id=7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac namespace=k8s.io May 13 00:22:38.289614 containerd[1446]: time="2025-05-13T00:22:38.289618854Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:22:38.295531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e6afae7c81bf912704a595d8fb48c1888b7a2b5c4234a3fb61c6b78e99515ac-rootfs.mount: Deactivated successfully. May 13 00:22:38.966991 kubelet[2539]: E0513 00:22:38.966896 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdwm" podUID="8de2152a-6cd4-4599-a610-aac788d746cf" May 13 00:22:39.052289 kubelet[2539]: E0513 00:22:39.052225 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:39.053259 containerd[1446]: time="2025-05-13T00:22:39.053220068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:22:40.968009 kubelet[2539]: E0513 00:22:40.967962 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdwm" podUID="8de2152a-6cd4-4599-a610-aac788d746cf" May 13 00:22:42.966884 kubelet[2539]: E0513 00:22:42.966836 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpdwm" podUID="8de2152a-6cd4-4599-a610-aac788d746cf" May 13 00:22:43.322265 containerd[1446]: time="2025-05-13T00:22:43.322215381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:43.322856 containerd[1446]: time="2025-05-13T00:22:43.322822826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 00:22:43.323475 containerd[1446]: time="2025-05-13T00:22:43.323445594Z" level=info msg="ImageCreate event 
name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:43.325885 containerd[1446]: time="2025-05-13T00:22:43.325844487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:43.326603 containerd[1446]: time="2025-05-13T00:22:43.326566955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.273307998s" May 13 00:22:43.326655 containerd[1446]: time="2025-05-13T00:22:43.326600042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 00:22:43.329446 containerd[1446]: time="2025-05-13T00:22:43.329413140Z" level=info msg="CreateContainer within sandbox \"468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:22:43.341195 containerd[1446]: time="2025-05-13T00:22:43.341087017Z" level=info msg="CreateContainer within sandbox \"468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1\"" May 13 00:22:43.342917 containerd[1446]: time="2025-05-13T00:22:43.341560195Z" level=info msg="StartContainer for \"0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1\"" May 13 00:22:43.367635 systemd[1]: run-containerd-runc-k8s.io-0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1-runc.pjNI4t.mount: Deactivated successfully. May 13 00:22:43.377880 systemd[1]: Started cri-containerd-0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1.scope - libcontainer container 0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1. May 13 00:22:43.490077 containerd[1446]: time="2025-05-13T00:22:43.490028650Z" level=info msg="StartContainer for \"0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1\" returns successfully" May 13 00:22:44.070692 kubelet[2539]: E0513 00:22:44.070638 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:44.120723 containerd[1446]: time="2025-05-13T00:22:44.120643848Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:22:44.123902 systemd[1]: cri-containerd-0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1.scope: Deactivated successfully. May 13 00:22:44.147406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1-rootfs.mount: Deactivated successfully. 
May 13 00:22:44.158633 containerd[1446]: time="2025-05-13T00:22:44.158205260Z" level=info msg="shim disconnected" id=0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1 namespace=k8s.io May 13 00:22:44.158633 containerd[1446]: time="2025-05-13T00:22:44.158279874Z" level=warning msg="cleaning up after shim disconnected" id=0bed7cd264242b018deeaa5ad7bf52a71c6cb3b08ef8a4938662027f91ce18e1 namespace=k8s.io May 13 00:22:44.158633 containerd[1446]: time="2025-05-13T00:22:44.158290316Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:22:44.166945 kubelet[2539]: I0513 00:22:44.166219 2539 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:22:44.188104 kubelet[2539]: I0513 00:22:44.188066 2539 topology_manager.go:215] "Topology Admit Handler" podUID="5937a833-2917-44dc-be70-b888b2f1c194" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kpx49" May 13 00:22:44.194973 kubelet[2539]: I0513 00:22:44.194933 2539 topology_manager.go:215] "Topology Admit Handler" podUID="333d75f2-2d65-4963-bba9-b7a1ce798de8" podNamespace="calico-system" podName="calico-kube-controllers-69855fccf-cpn7n" May 13 00:22:44.195099 kubelet[2539]: I0513 00:22:44.195082 2539 topology_manager.go:215] "Topology Admit Handler" podUID="a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xcx4r" May 13 00:22:44.195203 kubelet[2539]: I0513 00:22:44.195185 2539 topology_manager.go:215] "Topology Admit Handler" podUID="d1cea400-e767-4428-8f90-26704f1d5213" podNamespace="calico-apiserver" podName="calico-apiserver-6894c4f4db-vq79h" May 13 00:22:44.195292 kubelet[2539]: I0513 00:22:44.195277 2539 topology_manager.go:215] "Topology Admit Handler" podUID="775b536a-9cd7-44ec-b10b-8740a8d0f7ab" podNamespace="calico-apiserver" podName="calico-apiserver-6894c4f4db-g2hb4" May 13 00:22:44.211121 systemd[1]: Created slice kubepods-besteffort-pod333d75f2_2d65_4963_bba9_b7a1ce798de8.slice - libcontainer container kubepods-besteffort-pod333d75f2_2d65_4963_bba9_b7a1ce798de8.slice. May 13 00:22:44.217557 systemd[1]: Created slice kubepods-besteffort-podd1cea400_e767_4428_8f90_26704f1d5213.slice - libcontainer container kubepods-besteffort-podd1cea400_e767_4428_8f90_26704f1d5213.slice. May 13 00:22:44.224903 systemd[1]: Created slice kubepods-besteffort-pod775b536a_9cd7_44ec_b10b_8740a8d0f7ab.slice - libcontainer container kubepods-besteffort-pod775b536a_9cd7_44ec_b10b_8740a8d0f7ab.slice. May 13 00:22:44.231291 systemd[1]: Created slice kubepods-burstable-poda4d0cfcc_1589_4e5f_bdf4_ebe5ae5fb30f.slice - libcontainer container kubepods-burstable-poda4d0cfcc_1589_4e5f_bdf4_ebe5ae5fb30f.slice. May 13 00:22:44.237734 systemd[1]: Created slice kubepods-burstable-pod5937a833_2917_44dc_be70_b888b2f1c194.slice - libcontainer container kubepods-burstable-pod5937a833_2917_44dc_be70_b888b2f1c194.slice. 
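Note: the slice names above encode each pod's QoS class and UID. systemd uses "-" as a hierarchy separator in slice names, so the UID's dashes are mapped to underscores. A small sketch reproducing the names logged for the Burstable and BestEffort pods above (the rule is inferred from these entries and kubelet's systemd cgroup driver; Guaranteed pods, not present here, sit directly under kubepods.slice):

    // podslice.go - reconstructs the slice names visible above.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("besteffort", "333d75f2-2d65-4963-bba9-b7a1ce798de8"))
        // -> kubepods-besteffort-pod333d75f2_2d65_4963_bba9_b7a1ce798de8.slice
        fmt.Println(podSlice("burstable", "5937a833-2917-44dc-be70-b888b2f1c194"))
        // -> kubepods-burstable-pod5937a833_2917_44dc_be70_b888b2f1c194.slice
    }
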
May 13 00:22:44.355663 kubelet[2539]: I0513 00:22:44.355541 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrsh\" (UniqueName: \"kubernetes.io/projected/d1cea400-e767-4428-8f90-26704f1d5213-kube-api-access-sfrsh\") pod \"calico-apiserver-6894c4f4db-vq79h\" (UID: \"d1cea400-e767-4428-8f90-26704f1d5213\") " pod="calico-apiserver/calico-apiserver-6894c4f4db-vq79h" May 13 00:22:44.355663 kubelet[2539]: I0513 00:22:44.355596 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/775b536a-9cd7-44ec-b10b-8740a8d0f7ab-calico-apiserver-certs\") pod \"calico-apiserver-6894c4f4db-g2hb4\" (UID: \"775b536a-9cd7-44ec-b10b-8740a8d0f7ab\") " pod="calico-apiserver/calico-apiserver-6894c4f4db-g2hb4" May 13 00:22:44.355663 kubelet[2539]: I0513 00:22:44.355621 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/333d75f2-2d65-4963-bba9-b7a1ce798de8-tigera-ca-bundle\") pod \"calico-kube-controllers-69855fccf-cpn7n\" (UID: \"333d75f2-2d65-4963-bba9-b7a1ce798de8\") " pod="calico-system/calico-kube-controllers-69855fccf-cpn7n" May 13 00:22:44.355663 kubelet[2539]: I0513 00:22:44.355638 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f-config-volume\") pod \"coredns-7db6d8ff4d-xcx4r\" (UID: \"a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f\") " pod="kube-system/coredns-7db6d8ff4d-xcx4r" May 13 00:22:44.355881 kubelet[2539]: I0513 00:22:44.355687 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k4f2\" (UniqueName: \"kubernetes.io/projected/a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f-kube-api-access-7k4f2\") pod \"coredns-7db6d8ff4d-xcx4r\" (UID: \"a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f\") " pod="kube-system/coredns-7db6d8ff4d-xcx4r" May 13 00:22:44.355881 kubelet[2539]: I0513 00:22:44.355707 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d1cea400-e767-4428-8f90-26704f1d5213-calico-apiserver-certs\") pod \"calico-apiserver-6894c4f4db-vq79h\" (UID: \"d1cea400-e767-4428-8f90-26704f1d5213\") " pod="calico-apiserver/calico-apiserver-6894c4f4db-vq79h" May 13 00:22:44.355881 kubelet[2539]: I0513 00:22:44.355725 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5937a833-2917-44dc-be70-b888b2f1c194-config-volume\") pod \"coredns-7db6d8ff4d-kpx49\" (UID: \"5937a833-2917-44dc-be70-b888b2f1c194\") " pod="kube-system/coredns-7db6d8ff4d-kpx49" May 13 00:22:44.355881 kubelet[2539]: I0513 00:22:44.355741 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n9pb\" (UniqueName: \"kubernetes.io/projected/775b536a-9cd7-44ec-b10b-8740a8d0f7ab-kube-api-access-2n9pb\") pod \"calico-apiserver-6894c4f4db-g2hb4\" (UID: \"775b536a-9cd7-44ec-b10b-8740a8d0f7ab\") " pod="calico-apiserver/calico-apiserver-6894c4f4db-g2hb4" May 13 00:22:44.355881 kubelet[2539]: I0513 00:22:44.355759 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-chwhg\" (UniqueName: \"kubernetes.io/projected/333d75f2-2d65-4963-bba9-b7a1ce798de8-kube-api-access-chwhg\") pod \"calico-kube-controllers-69855fccf-cpn7n\" (UID: \"333d75f2-2d65-4963-bba9-b7a1ce798de8\") " pod="calico-system/calico-kube-controllers-69855fccf-cpn7n" May 13 00:22:44.356005 kubelet[2539]: I0513 00:22:44.355778 2539 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwj9\" (UniqueName: \"kubernetes.io/projected/5937a833-2917-44dc-be70-b888b2f1c194-kube-api-access-tlwj9\") pod \"coredns-7db6d8ff4d-kpx49\" (UID: \"5937a833-2917-44dc-be70-b888b2f1c194\") " pod="kube-system/coredns-7db6d8ff4d-kpx49" May 13 00:22:44.516642 containerd[1446]: time="2025-05-13T00:22:44.516390496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69855fccf-cpn7n,Uid:333d75f2-2d65-4963-bba9-b7a1ce798de8,Namespace:calico-system,Attempt:0,}" May 13 00:22:44.521857 containerd[1446]: time="2025-05-13T00:22:44.521781520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6894c4f4db-vq79h,Uid:d1cea400-e767-4428-8f90-26704f1d5213,Namespace:calico-apiserver,Attempt:0,}" May 13 00:22:44.528698 containerd[1446]: time="2025-05-13T00:22:44.528584062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6894c4f4db-g2hb4,Uid:775b536a-9cd7-44ec-b10b-8740a8d0f7ab,Namespace:calico-apiserver,Attempt:0,}" May 13 00:22:44.535992 kubelet[2539]: E0513 00:22:44.535950 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:44.538895 containerd[1446]: time="2025-05-13T00:22:44.536848333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xcx4r,Uid:a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f,Namespace:kube-system,Attempt:0,}" May 13 00:22:44.543325 kubelet[2539]: E0513 00:22:44.543291 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:44.545042 containerd[1446]: time="2025-05-13T00:22:44.544992180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kpx49,Uid:5937a833-2917-44dc-be70-b888b2f1c194,Namespace:kube-system,Attempt:0,}" May 13 00:22:44.922890 containerd[1446]: time="2025-05-13T00:22:44.922825053Z" level=error msg="Failed to destroy network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.923033 containerd[1446]: time="2025-05-13T00:22:44.922902388Z" level=error msg="Failed to destroy network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.923272 containerd[1446]: time="2025-05-13T00:22:44.923246896Z" level=error msg="encountered an error cleaning up failed sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.923324 containerd[1446]: time="2025-05-13T00:22:44.923298506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69855fccf-cpn7n,Uid:333d75f2-2d65-4963-bba9-b7a1ce798de8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.923851 containerd[1446]: time="2025-05-13T00:22:44.923820929Z" level=error msg="encountered an error cleaning up failed sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.923895 containerd[1446]: time="2025-05-13T00:22:44.923873740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xcx4r,Uid:a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.924485 containerd[1446]: time="2025-05-13T00:22:44.924391402Z" level=error msg="Failed to destroy network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.924634 containerd[1446]: time="2025-05-13T00:22:44.924602884Z" level=error msg="Failed to destroy network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.924856 containerd[1446]: time="2025-05-13T00:22:44.924828848Z" level=error msg="encountered an error cleaning up failed sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.924979 containerd[1446]: time="2025-05-13T00:22:44.924942711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6894c4f4db-g2hb4,Uid:775b536a-9cd7-44ec-b10b-8740a8d0f7ab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.925679 containerd[1446]: time="2025-05-13T00:22:44.925620845Z" level=error msg="encountered an error cleaning up failed 
sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.925724 containerd[1446]: time="2025-05-13T00:22:44.925695619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6894c4f4db-vq79h,Uid:d1cea400-e767-4428-8f90-26704f1d5213,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.926215 kubelet[2539]: E0513 00:22:44.926150 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.926285 kubelet[2539]: E0513 00:22:44.926266 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69855fccf-cpn7n" May 13 00:22:44.926327 kubelet[2539]: E0513 00:22:44.926297 2539 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69855fccf-cpn7n" May 13 00:22:44.926367 kubelet[2539]: E0513 00:22:44.926149 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.926407 kubelet[2539]: E0513 00:22:44.926352 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69855fccf-cpn7n_calico-system(333d75f2-2d65-4963-bba9-b7a1ce798de8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69855fccf-cpn7n_calico-system(333d75f2-2d65-4963-bba9-b7a1ce798de8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-69855fccf-cpn7n" podUID="333d75f2-2d65-4963-bba9-b7a1ce798de8" May 13 00:22:44.926407 kubelet[2539]: E0513 00:22:44.926393 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6894c4f4db-g2hb4" May 13 00:22:44.926476 kubelet[2539]: E0513 00:22:44.926414 2539 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6894c4f4db-g2hb4" May 13 00:22:44.926476 kubelet[2539]: E0513 00:22:44.926446 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6894c4f4db-g2hb4_calico-apiserver(775b536a-9cd7-44ec-b10b-8740a8d0f7ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6894c4f4db-g2hb4_calico-apiserver(775b536a-9cd7-44ec-b10b-8740a8d0f7ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6894c4f4db-g2hb4" podUID="775b536a-9cd7-44ec-b10b-8740a8d0f7ab" May 13 00:22:44.928150 kubelet[2539]: E0513 00:22:44.928120 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.928219 kubelet[2539]: E0513 00:22:44.928164 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6894c4f4db-vq79h" May 13 00:22:44.928219 kubelet[2539]: E0513 00:22:44.928196 2539 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6894c4f4db-vq79h" May 13 00:22:44.928279 kubelet[2539]: E0513 00:22:44.928235 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-6894c4f4db-vq79h_calico-apiserver(d1cea400-e767-4428-8f90-26704f1d5213)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6894c4f4db-vq79h_calico-apiserver(d1cea400-e767-4428-8f90-26704f1d5213)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6894c4f4db-vq79h" podUID="d1cea400-e767-4428-8f90-26704f1d5213" May 13 00:22:44.928328 kubelet[2539]: E0513 00:22:44.926149 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.928328 kubelet[2539]: E0513 00:22:44.928302 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xcx4r" May 13 00:22:44.928328 kubelet[2539]: E0513 00:22:44.928318 2539 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xcx4r" May 13 00:22:44.928469 kubelet[2539]: E0513 00:22:44.928442 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xcx4r_kube-system(a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xcx4r_kube-system(a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xcx4r" podUID="a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f" May 13 00:22:44.935560 containerd[1446]: time="2025-05-13T00:22:44.935446783Z" level=error msg="Failed to destroy network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.935924 containerd[1446]: time="2025-05-13T00:22:44.935883550Z" level=error msg="encountered an error cleaning up failed sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.936132 containerd[1446]: time="2025-05-13T00:22:44.936009054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kpx49,Uid:5937a833-2917-44dc-be70-b888b2f1c194,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.936254 kubelet[2539]: E0513 00:22:44.936219 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:44.936293 kubelet[2539]: E0513 00:22:44.936270 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kpx49" May 13 00:22:44.936321 kubelet[2539]: E0513 00:22:44.936289 2539 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kpx49" May 13 00:22:44.936367 kubelet[2539]: E0513 00:22:44.936329 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kpx49_kube-system(5937a833-2917-44dc-be70-b888b2f1c194)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-kpx49_kube-system(5937a833-2917-44dc-be70-b888b2f1c194)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kpx49" podUID="5937a833-2917-44dc-be70-b888b2f1c194" May 13 00:22:44.973146 systemd[1]: Created slice kubepods-besteffort-pod8de2152a_6cd4_4599_a610_aac788d746cf.slice - libcontainer container kubepods-besteffort-pod8de2152a_6cd4_4599_a610_aac788d746cf.slice. 
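Note: every sandbox create and teardown in this stretch fails with the same root cause. The Calico CNI plugin reads the node name from /var/lib/calico/nodename, a file the calico/node container writes after it starts (hence the hint in the error text); until calico-node is running, every CNI ADD and DEL returns this error and kubelet keeps retrying. A hedged sketch of that gate (illustrative, not Calico's source):

    // nodenamegate.go - sketch of the check behind the repeated failures.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func nodename() (string, error) {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if os.IsNotExist(err) {
            // Same wording as the log lines above.
            return "", fmt.Errorf("stat /var/lib/calico/nodename: no such file or directory: " +
                "check that the calico/node container is running and has mounted /var/lib/calico/")
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Println("cni plugin would fail here:", err)
            return
        }
        fmt.Println("node:", name)
    }
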
May 13 00:22:44.975660 containerd[1446]: time="2025-05-13T00:22:44.975612949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpdwm,Uid:8de2152a-6cd4-4599-a610-aac788d746cf,Namespace:calico-system,Attempt:0,}" May 13 00:22:45.032024 containerd[1446]: time="2025-05-13T00:22:45.031973273Z" level=error msg="Failed to destroy network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.032331 containerd[1446]: time="2025-05-13T00:22:45.032302255Z" level=error msg="encountered an error cleaning up failed sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.032389 containerd[1446]: time="2025-05-13T00:22:45.032365947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpdwm,Uid:8de2152a-6cd4-4599-a610-aac788d746cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.032677 kubelet[2539]: E0513 00:22:45.032597 2539 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.032737 kubelet[2539]: E0513 00:22:45.032681 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qpdwm" May 13 00:22:45.032737 kubelet[2539]: E0513 00:22:45.032703 2539 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qpdwm" May 13 00:22:45.032787 kubelet[2539]: E0513 00:22:45.032747 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qpdwm_calico-system(8de2152a-6cd4-4599-a610-aac788d746cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qpdwm_calico-system(8de2152a-6cd4-4599-a610-aac788d746cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qpdwm" podUID="8de2152a-6cd4-4599-a610-aac788d746cf" May 13 00:22:45.073298 kubelet[2539]: I0513 00:22:45.072863 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:22:45.073663 containerd[1446]: time="2025-05-13T00:22:45.073416976Z" level=info msg="StopPodSandbox for \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\"" May 13 00:22:45.073909 containerd[1446]: time="2025-05-13T00:22:45.073881064Z" level=info msg="Ensure that sandbox bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478 in task-service has been cleanup successfully" May 13 00:22:45.075491 kubelet[2539]: E0513 00:22:45.075461 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:45.077146 kubelet[2539]: I0513 00:22:45.076809 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:22:45.077234 containerd[1446]: time="2025-05-13T00:22:45.077171369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:22:45.078616 containerd[1446]: time="2025-05-13T00:22:45.078580236Z" level=info msg="StopPodSandbox for \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\"" May 13 00:22:45.078815 containerd[1446]: time="2025-05-13T00:22:45.078793316Z" level=info msg="Ensure that sandbox d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800 in task-service has been cleanup successfully" May 13 00:22:45.084863 kubelet[2539]: I0513 00:22:45.084371 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:22:45.085364 containerd[1446]: time="2025-05-13T00:22:45.084887473Z" level=info msg="StopPodSandbox for \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\"" May 13 00:22:45.086127 containerd[1446]: time="2025-05-13T00:22:45.086086220Z" level=info msg="Ensure that sandbox 0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b in task-service has been cleanup successfully" May 13 00:22:45.087187 kubelet[2539]: I0513 00:22:45.087164 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:22:45.088547 containerd[1446]: time="2025-05-13T00:22:45.088502479Z" level=info msg="StopPodSandbox for \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\"" May 13 00:22:45.089288 containerd[1446]: time="2025-05-13T00:22:45.089024178Z" level=info msg="Ensure that sandbox f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c in task-service has been cleanup successfully" May 13 00:22:45.090712 kubelet[2539]: I0513 00:22:45.090402 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:22:45.092805 containerd[1446]: time="2025-05-13T00:22:45.092763687Z" level=info 
msg="StopPodSandbox for \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\"" May 13 00:22:45.093822 containerd[1446]: time="2025-05-13T00:22:45.093712627Z" level=info msg="Ensure that sandbox 87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07 in task-service has been cleanup successfully" May 13 00:22:45.095296 kubelet[2539]: I0513 00:22:45.095260 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:22:45.095848 containerd[1446]: time="2025-05-13T00:22:45.095801904Z" level=info msg="StopPodSandbox for \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\"" May 13 00:22:45.096098 containerd[1446]: time="2025-05-13T00:22:45.095995260Z" level=info msg="Ensure that sandbox eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906 in task-service has been cleanup successfully" May 13 00:22:45.130310 containerd[1446]: time="2025-05-13T00:22:45.129934660Z" level=error msg="StopPodSandbox for \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\" failed" error="failed to destroy network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.134043 kubelet[2539]: E0513 00:22:45.133977 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:22:45.134173 kubelet[2539]: E0513 00:22:45.134065 2539 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478"} May 13 00:22:45.134173 kubelet[2539]: E0513 00:22:45.134130 2539 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"333d75f2-2d65-4963-bba9-b7a1ce798de8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:45.134173 kubelet[2539]: E0513 00:22:45.134153 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"333d75f2-2d65-4963-bba9-b7a1ce798de8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69855fccf-cpn7n" podUID="333d75f2-2d65-4963-bba9-b7a1ce798de8" May 13 00:22:45.145842 containerd[1446]: time="2025-05-13T00:22:45.145776866Z" level=error msg="StopPodSandbox for 
\"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\" failed" error="failed to destroy network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.146610 kubelet[2539]: E0513 00:22:45.146011 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:22:45.146610 kubelet[2539]: E0513 00:22:45.146062 2539 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c"} May 13 00:22:45.146610 kubelet[2539]: E0513 00:22:45.146095 2539 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"775b536a-9cd7-44ec-b10b-8740a8d0f7ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:45.146610 kubelet[2539]: E0513 00:22:45.146117 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"775b536a-9cd7-44ec-b10b-8740a8d0f7ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6894c4f4db-g2hb4" podUID="775b536a-9cd7-44ec-b10b-8740a8d0f7ab" May 13 00:22:45.146826 kubelet[2539]: E0513 00:22:45.146784 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:22:45.146826 kubelet[2539]: E0513 00:22:45.146821 2539 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b"} May 13 00:22:45.146877 containerd[1446]: time="2025-05-13T00:22:45.146615625Z" level=error msg="StopPodSandbox for \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\" failed" error="failed to destroy network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 13 00:22:45.146912 kubelet[2539]: E0513 00:22:45.146848 2539 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:45.146912 kubelet[2539]: E0513 00:22:45.146867 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xcx4r" podUID="a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f" May 13 00:22:45.155583 containerd[1446]: time="2025-05-13T00:22:45.155543159Z" level=error msg="StopPodSandbox for \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\" failed" error="failed to destroy network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.155805 kubelet[2539]: E0513 00:22:45.155771 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:22:45.155859 kubelet[2539]: E0513 00:22:45.155815 2539 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906"} May 13 00:22:45.155859 kubelet[2539]: E0513 00:22:45.155844 2539 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8de2152a-6cd4-4599-a610-aac788d746cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:45.155931 kubelet[2539]: E0513 00:22:45.155868 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8de2152a-6cd4-4599-a610-aac788d746cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qpdwm" podUID="8de2152a-6cd4-4599-a610-aac788d746cf" May 13 00:22:45.155969 containerd[1446]: time="2025-05-13T00:22:45.155906868Z" level=error msg="StopPodSandbox for \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\" failed" error="failed to destroy network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.158719 kubelet[2539]: E0513 00:22:45.158564 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:22:45.158719 kubelet[2539]: E0513 00:22:45.158613 2539 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800"} May 13 00:22:45.158719 kubelet[2539]: E0513 00:22:45.158644 2539 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5937a833-2917-44dc-be70-b888b2f1c194\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:45.158719 kubelet[2539]: E0513 00:22:45.158673 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5937a833-2917-44dc-be70-b888b2f1c194\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kpx49" podUID="5937a833-2917-44dc-be70-b888b2f1c194" May 13 00:22:45.159297 containerd[1446]: time="2025-05-13T00:22:45.159173528Z" level=error msg="StopPodSandbox for \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\" failed" error="failed to destroy network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:22:45.159368 kubelet[2539]: E0513 00:22:45.159334 2539 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:22:45.159415 kubelet[2539]: E0513 00:22:45.159374 2539 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07"} May 13 00:22:45.159415 kubelet[2539]: E0513 00:22:45.159400 2539 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1cea400-e767-4428-8f90-26704f1d5213\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:22:45.159476 kubelet[2539]: E0513 00:22:45.159418 2539 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1cea400-e767-4428-8f90-26704f1d5213\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6894c4f4db-vq79h" podUID="d1cea400-e767-4428-8f90-26704f1d5213" May 13 00:22:45.890318 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:44464.service - OpenSSH per-connection server daemon (10.0.0.1:44464). May 13 00:22:45.931727 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 44464 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:22:45.933246 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:45.938642 systemd-logind[1422]: New session 8 of user core. May 13 00:22:45.942819 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:22:46.061790 sshd[3642]: pam_unix(sshd:session): session closed for user core May 13 00:22:46.065126 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:44464.service: Deactivated successfully. May 13 00:22:46.066798 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:22:46.067406 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. May 13 00:22:46.068257 systemd-logind[1422]: Removed session 8. May 13 00:22:49.094180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66723215.mount: Deactivated successfully. 
May 13 00:22:49.357588 containerd[1446]: time="2025-05-13T00:22:49.357465681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:49.358707 containerd[1446]: time="2025-05-13T00:22:49.358601347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 00:22:49.359816 containerd[1446]: time="2025-05-13T00:22:49.359787822Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:49.361722 containerd[1446]: time="2025-05-13T00:22:49.361682412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:49.363221 containerd[1446]: time="2025-05-13T00:22:49.362827880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.285614744s" May 13 00:22:49.363221 containerd[1446]: time="2025-05-13T00:22:49.362873207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 00:22:49.370064 containerd[1446]: time="2025-05-13T00:22:49.370026820Z" level=info msg="CreateContainer within sandbox \"468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:22:49.388646 containerd[1446]: time="2025-05-13T00:22:49.388598103Z" level=info msg="CreateContainer within sandbox \"468773c23983433f4b9050eb1fd4d957934787b0734b7d473c4f629680b44838\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eaa55cc1f98149618f0c837fab892ffaf64d8c9ce286ecaf1cbb802053435f86\"" May 13 00:22:49.389659 containerd[1446]: time="2025-05-13T00:22:49.389480328Z" level=info msg="StartContainer for \"eaa55cc1f98149618f0c837fab892ffaf64d8c9ce286ecaf1cbb802053435f86\"" May 13 00:22:49.443866 systemd[1]: Started cri-containerd-eaa55cc1f98149618f0c837fab892ffaf64d8c9ce286ecaf1cbb802053435f86.scope - libcontainer container eaa55cc1f98149618f0c837fab892ffaf64d8c9ce286ecaf1cbb802053435f86. May 13 00:22:49.526544 containerd[1446]: time="2025-05-13T00:22:49.526492302Z" level=info msg="StartContainer for \"eaa55cc1f98149618f0c837fab892ffaf64d8c9ce286ecaf1cbb802053435f86\" returns successfully" May 13 00:22:49.669520 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:22:49.669707 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 13 00:22:50.111998 kubelet[2539]: E0513 00:22:50.111735 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:51.073399 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:44474.service - OpenSSH per-connection server daemon (10.0.0.1:44474).
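Note: the recurring "Nameserver limits exceeded" entries (another follows immediately below) come from kubelet validating the node's resolv.conf. The glibc resolver honours at most three nameservers (MAXNS), so kubelet applies the first three and warns about the rest, which is why exactly 1.1.1.1 1.0.0.1 8.8.8.8 survive. A sketch of that clamp, assuming a standard resolv.conf layout:

    // resolvlimit.go - sketch of kubelet's nameserver clamp.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet warns past this

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
                "the applied nameserver line is: %s\n", strings.Join(servers[:maxNameservers], " "))
        }
    }
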
May 13 00:22:51.112003 kubelet[2539]: E0513 00:22:51.111922 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:51.118706 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 44474 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:22:51.120184 sshd[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:51.125530 systemd-logind[1422]: New session 9 of user core. May 13 00:22:51.130834 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:22:51.250118 sshd[3851]: pam_unix(sshd:session): session closed for user core May 13 00:22:51.253771 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:44474.service: Deactivated successfully. May 13 00:22:51.256125 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:22:51.256917 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. May 13 00:22:51.257684 systemd-logind[1422]: Removed session 9. May 13 00:22:54.231528 kubelet[2539]: I0513 00:22:54.231479 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:22:54.232172 kubelet[2539]: E0513 00:22:54.232138 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:54.266317 kubelet[2539]: I0513 00:22:54.265777 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qpwwj" podStartSLOduration=5.423312515 podStartE2EDuration="19.265760017s" podCreationTimestamp="2025-05-13 00:22:35 +0000 UTC" firstStartedPulling="2025-05-13 00:22:35.521317531 +0000 UTC m=+20.633448310" lastFinishedPulling="2025-05-13 00:22:49.363765033 +0000 UTC m=+34.475895812" observedRunningTime="2025-05-13 00:22:50.123517553 +0000 UTC m=+35.235648332" watchObservedRunningTime="2025-05-13 00:22:54.265760017 +0000 UTC m=+39.377890796" May 13 00:22:55.119372 kubelet[2539]: E0513 00:22:55.119319 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:55.211698 kernel: bpftool[3989]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:22:55.410244 systemd-networkd[1376]: vxlan.calico: Link UP May 13 00:22:55.410250 systemd-networkd[1376]: vxlan.calico: Gained carrier May 13 00:22:56.263483 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:33348.service - OpenSSH per-connection server daemon (10.0.0.1:33348). May 13 00:22:56.307739 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 33348 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:22:56.309176 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:56.312695 systemd-logind[1422]: New session 10 of user core. May 13 00:22:56.324988 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:22:56.453736 sshd[4103]: pam_unix(sshd:session): session closed for user core May 13 00:22:56.461555 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:33348.service: Deactivated successfully. May 13 00:22:56.463170 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:22:56.464743 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. 
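The pod_startup_latency_tracker entry for calico-node-qpwwj above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick Go check of the logged numbers, as my reading of the fields rather than kubelet's code:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-13T00:22:35Z")             // podCreationTimestamp
	firstPull := mustParse("2025-05-13T00:22:35.521317531Z") // firstStartedPulling
	lastPull := mustParse("2025-05-13T00:22:49.363765033Z")  // lastFinishedPulling
	observed := mustParse("2025-05-13T00:22:54.265760017Z")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // pull window excluded
	fmt.Println(e2e, slo)                // 19.265760017s 5.423312515s
}
```

19.265760017s minus the 13.842447502s pull window is 5.423312515s, exactly the logged SLO duration: the tracker charges image pulls to E2E latency but excludes them from the SLO figure.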
May 13 00:22:56.476972 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:33358.service - OpenSSH per-connection server daemon (10.0.0.1:33358). May 13 00:22:56.478005 systemd-logind[1422]: Removed session 10. May 13 00:22:56.512266 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 33358 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:22:56.515274 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:56.520285 systemd-logind[1422]: New session 11 of user core. May 13 00:22:56.530839 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:22:56.690221 sshd[4118]: pam_unix(sshd:session): session closed for user core May 13 00:22:56.699308 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:33358.service: Deactivated successfully. May 13 00:22:56.700696 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:22:56.702714 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. May 13 00:22:56.704451 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:33372.service - OpenSSH per-connection server daemon (10.0.0.1:33372). May 13 00:22:56.705726 systemd-logind[1422]: Removed session 11. May 13 00:22:56.760609 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 33372 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:22:56.761079 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:22:56.766690 systemd-logind[1422]: New session 12 of user core. May 13 00:22:56.776842 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:22:56.898059 sshd[4131]: pam_unix(sshd:session): session closed for user core May 13 00:22:56.901801 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:33372.service: Deactivated successfully. May 13 00:22:56.905281 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:22:56.905954 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. May 13 00:22:56.906788 systemd-logind[1422]: Removed session 12. May 13 00:22:56.975742 containerd[1446]: time="2025-05-13T00:22:56.970908327Z" level=info msg="StopPodSandbox for \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\"" May 13 00:22:56.975742 containerd[1446]: time="2025-05-13T00:22:56.971221048Z" level=info msg="StopPodSandbox for \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\"" May 13 00:22:57.107131 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.097 [INFO][4176] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.098 [INFO][4176] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" iface="eth0" netns="/var/run/netns/cni-3edcb1be-db85-fef3-54d7-1241224741b8" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.099 [INFO][4176] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" iface="eth0" netns="/var/run/netns/cni-3edcb1be-db85-fef3-54d7-1241224741b8" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.100 [INFO][4176] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" iface="eth0" netns="/var/run/netns/cni-3edcb1be-db85-fef3-54d7-1241224741b8" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.100 [INFO][4176] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.100 [INFO][4176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.251 [INFO][4195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.251 [INFO][4195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.251 [INFO][4195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.263 [WARNING][4195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.263 [INFO][4195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.264 [INFO][4195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:57.270344 containerd[1446]: 2025-05-13 00:22:57.268 [INFO][4176] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:22:57.272281 systemd[1]: run-netns-cni\x2d3edcb1be\x2ddb85\x2dfef3\x2d54d7\x2d1241224741b8.mount: Deactivated successfully. 
May 13 00:22:57.274781 containerd[1446]: time="2025-05-13T00:22:57.270320140Z" level=info msg="TearDown network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\" successfully" May 13 00:22:57.274781 containerd[1446]: time="2025-05-13T00:22:57.273759182Z" level=info msg="StopPodSandbox for \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\" returns successfully" May 13 00:22:57.275663 kubelet[2539]: E0513 00:22:57.274283 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:57.276497 containerd[1446]: time="2025-05-13T00:22:57.276370237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kpx49,Uid:5937a833-2917-44dc-be70-b888b2f1c194,Namespace:kube-system,Attempt:1,}" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.098 [INFO][4177] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.098 [INFO][4177] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" iface="eth0" netns="/var/run/netns/cni-96e2e1a8-fd37-0068-6b7d-1e364f793c4e" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.099 [INFO][4177] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" iface="eth0" netns="/var/run/netns/cni-96e2e1a8-fd37-0068-6b7d-1e364f793c4e" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.100 [INFO][4177] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" iface="eth0" netns="/var/run/netns/cni-96e2e1a8-fd37-0068-6b7d-1e364f793c4e" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.100 [INFO][4177] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.100 [INFO][4177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.251 [INFO][4196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.251 [INFO][4196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.264 [INFO][4196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.274 [WARNING][4196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.274 [INFO][4196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.275 [INFO][4196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:57.280963 containerd[1446]: 2025-05-13 00:22:57.279 [INFO][4177] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:22:57.281310 containerd[1446]: time="2025-05-13T00:22:57.281180015Z" level=info msg="TearDown network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\" successfully" May 13 00:22:57.281310 containerd[1446]: time="2025-05-13T00:22:57.281203298Z" level=info msg="StopPodSandbox for \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\" returns successfully" May 13 00:22:57.282854 containerd[1446]: time="2025-05-13T00:22:57.282814905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6894c4f4db-g2hb4,Uid:775b536a-9cd7-44ec-b10b-8740a8d0f7ab,Namespace:calico-apiserver,Attempt:1,}" May 13 00:22:57.283475 systemd[1]: run-netns-cni\x2d96e2e1a8\x2dfd37\x2d0068\x2d6b7d\x2d1e364f793c4e.mount: Deactivated successfully. May 13 00:22:57.406845 systemd-networkd[1376]: cali3e060f72f4b: Link UP May 13 00:22:57.408554 systemd-networkd[1376]: cali3e060f72f4b: Gained carrier May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.331 [INFO][4210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0 coredns-7db6d8ff4d- kube-system 5937a833-2917-44dc-be70-b888b2f1c194 866 0 2025-05-13 00:22:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-kpx49 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e060f72f4b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kpx49" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kpx49-" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.331 [INFO][4210] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kpx49" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.358 [INFO][4240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" HandleID="k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.419305 
containerd[1446]: 2025-05-13 00:22:57.373 [INFO][4240] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" HandleID="k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360af0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-kpx49", "timestamp":"2025-05-13 00:22:57.358931125 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.373 [INFO][4240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.373 [INFO][4240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.373 [INFO][4240] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.375 [INFO][4240] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.379 [INFO][4240] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.384 [INFO][4240] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.385 [INFO][4240] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.388 [INFO][4240] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.388 [INFO][4240] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.390 [INFO][4240] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454 May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.393 [INFO][4240] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.398 [INFO][4240] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.398 [INFO][4240] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" host="localhost" May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.398 [INFO][4240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
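The assignment walk above is Calico's block-affinity model in miniature: confirm this host's affinity for block 192.168.88.128/26, load the block, claim the next free ordinal, and write the block back so the claim is durable. The first claim here is .129; the other two sandboxes later in this section get .130 and .131 from the same block. A toy next-free-address routine over the same /26 using net/netip (Calico's real block document tracks ordinals and handles, which this ignores):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first unallocated address in the block, skipping
// ordinal 0 (the network address), matching the .129/.130/.131 sequence
// in the log. Toy model only.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	addr := block.Addr().Next() // skip ordinal 0
	for block.Contains(addr) {
		if !used[addr] {
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{}
	for i := 0; i < 3; i++ {
		a, ok := nextFree(block, used)
		if !ok {
			panic("block exhausted")
		}
		used[a] = true
		fmt.Println(a) // 192.168.88.129, .130, .131
	}
}
```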
May 13 00:22:57.419305 containerd[1446]: 2025-05-13 00:22:57.398 [INFO][4240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" HandleID="k8s-pod-network.e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.420085 containerd[1446]: 2025-05-13 00:22:57.400 [INFO][4210] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kpx49" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5937a833-2917-44dc-be70-b888b2f1c194", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-kpx49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e060f72f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:57.420085 containerd[1446]: 2025-05-13 00:22:57.400 [INFO][4210] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kpx49" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.420085 containerd[1446]: 2025-05-13 00:22:57.400 [INFO][4210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e060f72f4b ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kpx49" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.420085 containerd[1446]: 2025-05-13 00:22:57.405 [INFO][4210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kpx49" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.420085 containerd[1446]: 2025-05-13 00:22:57.406 
[INFO][4210] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kpx49" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5937a833-2917-44dc-be70-b888b2f1c194", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454", Pod:"coredns-7db6d8ff4d-kpx49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e060f72f4b", MAC:"aa:2e:d5:32:37:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:57.420085 containerd[1446]: 2025-05-13 00:22:57.415 [INFO][4210] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kpx49" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:22:57.442868 systemd-networkd[1376]: califd0c9bb2fd7: Link UP May 13 00:22:57.443371 systemd-networkd[1376]: califd0c9bb2fd7: Gained carrier May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.331 [INFO][4211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0 calico-apiserver-6894c4f4db- calico-apiserver 775b536a-9cd7-44ec-b10b-8740a8d0f7ab 865 0 2025-05-13 00:22:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6894c4f4db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6894c4f4db-g2hb4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califd0c9bb2fd7 [] []}} ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-g2hb4" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.331 [INFO][4211] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-g2hb4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.357 [INFO][4239] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" HandleID="k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.373 [INFO][4239] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" HandleID="k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000289d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6894c4f4db-g2hb4", "timestamp":"2025-05-13 00:22:57.357923755 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.373 [INFO][4239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.399 [INFO][4239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.399 [INFO][4239] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.401 [INFO][4239] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.411 [INFO][4239] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.421 [INFO][4239] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.424 [INFO][4239] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.426 [INFO][4239] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.426 [INFO][4239] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.428 [INFO][4239] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2 May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.432 [INFO][4239] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.437 [INFO][4239] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.437 [INFO][4239] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" host="localhost" May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.437 [INFO][4239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:22:57.458727 containerd[1446]: 2025-05-13 00:22:57.437 [INFO][4239] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" HandleID="k8s-pod-network.d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.459323 containerd[1446]: 2025-05-13 00:22:57.440 [INFO][4211] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-g2hb4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0", GenerateName:"calico-apiserver-6894c4f4db-", Namespace:"calico-apiserver", SelfLink:"", UID:"775b536a-9cd7-44ec-b10b-8740a8d0f7ab", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6894c4f4db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6894c4f4db-g2hb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd0c9bb2fd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:57.459323 containerd[1446]: 2025-05-13 00:22:57.440 [INFO][4211] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-g2hb4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.459323 containerd[1446]: 2025-05-13 00:22:57.440 [INFO][4211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd0c9bb2fd7 ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-g2hb4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.459323 containerd[1446]: 2025-05-13 00:22:57.443 [INFO][4211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-g2hb4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.459323 containerd[1446]: 2025-05-13 00:22:57.444 [INFO][4211] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-g2hb4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0", GenerateName:"calico-apiserver-6894c4f4db-", Namespace:"calico-apiserver", SelfLink:"", UID:"775b536a-9cd7-44ec-b10b-8740a8d0f7ab", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6894c4f4db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2", Pod:"calico-apiserver-6894c4f4db-g2hb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd0c9bb2fd7", MAC:"46:65:50:85:f1:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:57.459323 containerd[1446]: 2025-05-13 00:22:57.455 [INFO][4211] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-g2hb4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:22:57.467593 containerd[1446]: time="2025-05-13T00:22:57.467510235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:57.467828 containerd[1446]: time="2025-05-13T00:22:57.467783030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:57.467941 containerd[1446]: time="2025-05-13T00:22:57.467828396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:57.468519 containerd[1446]: time="2025-05-13T00:22:57.468370905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:57.481551 containerd[1446]: time="2025-05-13T00:22:57.481133425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:57.481551 containerd[1446]: time="2025-05-13T00:22:57.481387498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:57.481551 containerd[1446]: time="2025-05-13T00:22:57.481400820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:57.481884 containerd[1446]: time="2025-05-13T00:22:57.481802831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:57.491820 systemd[1]: Started cri-containerd-e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454.scope - libcontainer container e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454. May 13 00:22:57.494682 systemd[1]: Started cri-containerd-d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2.scope - libcontainer container d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2. May 13 00:22:57.505937 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:57.510910 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:57.526242 containerd[1446]: time="2025-05-13T00:22:57.526184133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kpx49,Uid:5937a833-2917-44dc-be70-b888b2f1c194,Namespace:kube-system,Attempt:1,} returns sandbox id \"e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454\"" May 13 00:22:57.526972 kubelet[2539]: E0513 00:22:57.526950 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:57.529191 containerd[1446]: time="2025-05-13T00:22:57.529152715Z" level=info msg="CreateContainer within sandbox \"e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:22:57.536994 containerd[1446]: time="2025-05-13T00:22:57.536954597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6894c4f4db-g2hb4,Uid:775b536a-9cd7-44ec-b10b-8740a8d0f7ab,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2\"" May 13 00:22:57.539111 containerd[1446]: time="2025-05-13T00:22:57.539070829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:22:57.548016 containerd[1446]: time="2025-05-13T00:22:57.547973733Z" level=info msg="CreateContainer within sandbox \"e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0efc3766054ea4844d9d6f809706078a4779984bb9eac9a112eaa5e66a148092\"" May 13 00:22:57.548480 containerd[1446]: time="2025-05-13T00:22:57.548451114Z" level=info msg="StartContainer for \"0efc3766054ea4844d9d6f809706078a4779984bb9eac9a112eaa5e66a148092\"" May 13 00:22:57.572810 systemd[1]: Started cri-containerd-0efc3766054ea4844d9d6f809706078a4779984bb9eac9a112eaa5e66a148092.scope - libcontainer container 0efc3766054ea4844d9d6f809706078a4779984bb9eac9a112eaa5e66a148092. 
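Both endpoints above were finalized with "Added Mac, interface name, and active container ID": aa:2e:d5:32:37:20 for the coredns pod and 46:65:50:85:f1:18 for the apiserver pod. Both values have the shape of software-generated MACs, with the locally-administered bit (0x02) set in the first octet and the multicast bit (0x01) clear. A typical generator for such addresses, offered as a sketch and not as Calico's exact routine:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a random unicast, locally-administered MAC:
// clear the multicast bit (0x01) and set the local bit (0x02) in the
// first octet. The MACs in the log (e.g. aa:2e:d5:32:37:20) fit this shape.
func randomLocalMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}
```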
May 13 00:22:57.596348 containerd[1446]: time="2025-05-13T00:22:57.596114078Z" level=info msg="StartContainer for \"0efc3766054ea4844d9d6f809706078a4779984bb9eac9a112eaa5e66a148092\" returns successfully" May 13 00:22:57.967533 containerd[1446]: time="2025-05-13T00:22:57.967494593Z" level=info msg="StopPodSandbox for \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\"" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.009 [INFO][4419] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.010 [INFO][4419] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" iface="eth0" netns="/var/run/netns/cni-01ad272c-16b4-a6d6-b4d9-f37a639eae68" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.010 [INFO][4419] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" iface="eth0" netns="/var/run/netns/cni-01ad272c-16b4-a6d6-b4d9-f37a639eae68" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.010 [INFO][4419] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" iface="eth0" netns="/var/run/netns/cni-01ad272c-16b4-a6d6-b4d9-f37a639eae68" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.010 [INFO][4419] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.010 [INFO][4419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.029 [INFO][4428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.029 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.029 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.037 [WARNING][4428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.037 [INFO][4428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.038 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
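The dns.go:153 line that recurs throughout this section is kubelet trimming the node's resolv.conf: more nameservers are configured than it will propagate to pods, so it keeps only the first few and logs the applied line (1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that cap; the limit of three is inferred from the applied line here, and the dropped fourth entry below is invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // inferred from the three survivors in the log

// capNameservers keeps the first N resolvers, as the recurring dns.go
// warning above implies: extras are dropped and the event is logged.
func capNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// "8.8.4.4" is an invented fourth entry; the log does not show
	// which resolvers were omitted on this node.
	resolvers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, truncated := capNameservers(resolvers)
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```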
May 13 00:22:58.042034 containerd[1446]: 2025-05-13 00:22:58.040 [INFO][4419] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:22:58.042841 containerd[1446]: time="2025-05-13T00:22:58.042165771Z" level=info msg="TearDown network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\" successfully" May 13 00:22:58.042841 containerd[1446]: time="2025-05-13T00:22:58.042201736Z" level=info msg="StopPodSandbox for \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\" returns successfully" May 13 00:22:58.042895 kubelet[2539]: E0513 00:22:58.042542 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:58.043575 containerd[1446]: time="2025-05-13T00:22:58.043203101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xcx4r,Uid:a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f,Namespace:kube-system,Attempt:1,}" May 13 00:22:58.129816 kubelet[2539]: E0513 00:22:58.129762 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:58.157721 kubelet[2539]: I0513 00:22:58.157378 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kpx49" podStartSLOduration=29.157358154 podStartE2EDuration="29.157358154s" podCreationTimestamp="2025-05-13 00:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:58.142430645 +0000 UTC m=+43.254561424" watchObservedRunningTime="2025-05-13 00:22:58.157358154 +0000 UTC m=+43.269488933" May 13 00:22:58.163274 systemd-networkd[1376]: calic1f51f62b57: Link UP May 13 00:22:58.164127 systemd-networkd[1376]: calic1f51f62b57: Gained carrier May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.082 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0 coredns-7db6d8ff4d- kube-system a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f 887 0 2025-05-13 00:22:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-xcx4r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic1f51f62b57 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xcx4r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xcx4r-" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.083 [INFO][4437] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xcx4r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.108 [INFO][4452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" 
HandleID="k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.118 [INFO][4452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" HandleID="k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000576e80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-xcx4r", "timestamp":"2025-05-13 00:22:58.10804966 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.118 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.118 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.118 [INFO][4452] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.120 [INFO][4452] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" host="localhost" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.123 [INFO][4452] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.128 [INFO][4452] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.131 [INFO][4452] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.134 [INFO][4452] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.134 [INFO][4452] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" host="localhost" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.136 [INFO][4452] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0 May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.143 [INFO][4452] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" host="localhost" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.154 [INFO][4452] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" host="localhost" May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.155 [INFO][4452] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" host="localhost" May 13 
00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.155 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:58.175050 containerd[1446]: 2025-05-13 00:22:58.155 [INFO][4452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" HandleID="k8s-pod-network.b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.175840 containerd[1446]: 2025-05-13 00:22:58.158 [INFO][4437] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xcx4r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-xcx4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1f51f62b57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:58.175840 containerd[1446]: 2025-05-13 00:22:58.159 [INFO][4437] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xcx4r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.175840 containerd[1446]: 2025-05-13 00:22:58.159 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1f51f62b57 ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xcx4r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.175840 containerd[1446]: 2025-05-13 00:22:58.163 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xcx4r" 
WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.175840 containerd[1446]: 2025-05-13 00:22:58.164 [INFO][4437] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xcx4r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0", Pod:"coredns-7db6d8ff4d-xcx4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1f51f62b57", MAC:"e2:b9:9d:bb:95:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:58.175840 containerd[1446]: 2025-05-13 00:22:58.172 [INFO][4437] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xcx4r" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:22:58.197836 containerd[1446]: time="2025-05-13T00:22:58.197756612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:58.197836 containerd[1446]: time="2025-05-13T00:22:58.197804658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:58.197836 containerd[1446]: time="2025-05-13T00:22:58.197816620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:58.198059 containerd[1446]: time="2025-05-13T00:22:58.197885828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:58.218857 systemd[1]: Started cri-containerd-b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0.scope - libcontainer container b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0. May 13 00:22:58.228296 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:58.246330 containerd[1446]: time="2025-05-13T00:22:58.246288849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xcx4r,Uid:a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0\"" May 13 00:22:58.247015 kubelet[2539]: E0513 00:22:58.246989 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:58.250522 containerd[1446]: time="2025-05-13T00:22:58.250479893Z" level=info msg="CreateContainer within sandbox \"b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:22:58.265105 containerd[1446]: time="2025-05-13T00:22:58.264992591Z" level=info msg="CreateContainer within sandbox \"b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d66566ed70bab0b3f1e0c8b528734812637387ce37e81cfcb2e7e48441959d1\"" May 13 00:22:58.265478 containerd[1446]: time="2025-05-13T00:22:58.265448688Z" level=info msg="StartContainer for \"1d66566ed70bab0b3f1e0c8b528734812637387ce37e81cfcb2e7e48441959d1\"" May 13 00:22:58.275514 systemd[1]: run-netns-cni\x2d01ad272c\x2d16b4\x2da6d6\x2db4d9\x2df37a639eae68.mount: Deactivated successfully. May 13 00:22:58.298837 systemd[1]: Started cri-containerd-1d66566ed70bab0b3f1e0c8b528734812637387ce37e81cfcb2e7e48441959d1.scope - libcontainer container 1d66566ed70bab0b3f1e0c8b528734812637387ce37e81cfcb2e7e48441959d1. May 13 00:22:58.321937 containerd[1446]: time="2025-05-13T00:22:58.321889034Z" level=info msg="StartContainer for \"1d66566ed70bab0b3f1e0c8b528734812637387ce37e81cfcb2e7e48441959d1\" returns successfully" May 13 00:22:58.515133 systemd-networkd[1376]: califd0c9bb2fd7: Gained IPv6LL May 13 00:22:58.969892 containerd[1446]: time="2025-05-13T00:22:58.967800347Z" level=info msg="StopPodSandbox for \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\"" May 13 00:22:58.969892 containerd[1446]: time="2025-05-13T00:22:58.967847912Z" level=info msg="StopPodSandbox for \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\"" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.028 [INFO][4598] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.028 [INFO][4598] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" iface="eth0" netns="/var/run/netns/cni-c6c51a6d-fbdb-6636-4cf2-934bdd71a274" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.028 [INFO][4598] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" iface="eth0" netns="/var/run/netns/cni-c6c51a6d-fbdb-6636-4cf2-934bdd71a274" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.028 [INFO][4598] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" iface="eth0" netns="/var/run/netns/cni-c6c51a6d-fbdb-6636-4cf2-934bdd71a274" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.028 [INFO][4598] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.028 [INFO][4598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.053 [INFO][4615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.053 [INFO][4615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.054 [INFO][4615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.063 [WARNING][4615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.063 [INFO][4615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.065 [INFO][4615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:59.072782 containerd[1446]: 2025-05-13 00:22:59.067 [INFO][4598] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:22:59.074928 containerd[1446]: time="2025-05-13T00:22:59.074881729Z" level=info msg="TearDown network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\" successfully" May 13 00:22:59.074928 containerd[1446]: time="2025-05-13T00:22:59.074924614Z" level=info msg="StopPodSandbox for \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\" returns successfully" May 13 00:22:59.075798 containerd[1446]: time="2025-05-13T00:22:59.075758836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6894c4f4db-vq79h,Uid:d1cea400-e767-4428-8f90-26704f1d5213,Namespace:calico-apiserver,Attempt:1,}" May 13 00:22:59.077252 systemd[1]: run-netns-cni\x2dc6c51a6d\x2dfbdb\x2d6636\x2d4cf2\x2d934bdd71a274.mount: Deactivated successfully. 
May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.034 [INFO][4597] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.034 [INFO][4597] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" iface="eth0" netns="/var/run/netns/cni-75d4f569-3ee5-f714-374c-00d344c1225c" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.035 [INFO][4597] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" iface="eth0" netns="/var/run/netns/cni-75d4f569-3ee5-f714-374c-00d344c1225c" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.035 [INFO][4597] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" iface="eth0" netns="/var/run/netns/cni-75d4f569-3ee5-f714-374c-00d344c1225c" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.035 [INFO][4597] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.035 [INFO][4597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.069 [INFO][4621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.069 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.069 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.081 [WARNING][4621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.081 [INFO][4621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.085 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:59.098397 containerd[1446]: 2025-05-13 00:22:59.091 [INFO][4597] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:22:59.099027 containerd[1446]: time="2025-05-13T00:22:59.098531498Z" level=info msg="TearDown network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\" successfully" May 13 00:22:59.099027 containerd[1446]: time="2025-05-13T00:22:59.098599306Z" level=info msg="StopPodSandbox for \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\" returns successfully" May 13 00:22:59.099409 containerd[1446]: time="2025-05-13T00:22:59.099382962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69855fccf-cpn7n,Uid:333d75f2-2d65-4963-bba9-b7a1ce798de8,Namespace:calico-system,Attempt:1,}" May 13 00:22:59.134949 kubelet[2539]: E0513 00:22:59.134906 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:59.135821 kubelet[2539]: E0513 00:22:59.134975 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:22:59.149324 kubelet[2539]: I0513 00:22:59.149271 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xcx4r" podStartSLOduration=30.149253693 podStartE2EDuration="30.149253693s" podCreationTimestamp="2025-05-13 00:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:22:59.149157761 +0000 UTC m=+44.261288540" watchObservedRunningTime="2025-05-13 00:22:59.149253693 +0000 UTC m=+44.261384472" May 13 00:22:59.154931 systemd-networkd[1376]: cali3e060f72f4b: Gained IPv6LL May 13 00:22:59.239254 systemd-networkd[1376]: cali1f9be9524c3: Link UP May 13 00:22:59.242700 systemd-networkd[1376]: cali1f9be9524c3: Gained carrier May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.126 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0 calico-apiserver-6894c4f4db- calico-apiserver d1cea400-e767-4428-8f90-26704f1d5213 909 0 2025-05-13 00:22:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6894c4f4db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6894c4f4db-vq79h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f9be9524c3 [] []}} ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-vq79h" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.126 [INFO][4631] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-vq79h" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.173 [INFO][4658] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" HandleID="k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.187 [INFO][4658] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" HandleID="k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005394f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6894c4f4db-vq79h", "timestamp":"2025-05-13 00:22:59.173756925 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.187 [INFO][4658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.187 [INFO][4658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.187 [INFO][4658] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.190 [INFO][4658] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.199 [INFO][4658] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.206 [INFO][4658] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.210 [INFO][4658] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.214 [INFO][4658] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.214 [INFO][4658] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.217 [INFO][4658] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.221 [INFO][4658] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.227 [INFO][4658] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.228 [INFO][4658] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" host="localhost" May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.228 [INFO][4658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:22:59.259277 containerd[1446]: 2025-05-13 00:22:59.228 [INFO][4658] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" HandleID="k8s-pod-network.1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.260755 containerd[1446]: 2025-05-13 00:22:59.234 [INFO][4631] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-vq79h" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0", GenerateName:"calico-apiserver-6894c4f4db-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1cea400-e767-4428-8f90-26704f1d5213", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6894c4f4db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6894c4f4db-vq79h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f9be9524c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:59.260755 containerd[1446]: 2025-05-13 00:22:59.234 [INFO][4631] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-vq79h" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.260755 containerd[1446]: 2025-05-13 00:22:59.234 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f9be9524c3 ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-vq79h" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.260755 containerd[1446]: 2025-05-13 00:22:59.242 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-vq79h" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.260755 containerd[1446]: 2025-05-13 00:22:59.244 [INFO][4631] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-vq79h" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0", GenerateName:"calico-apiserver-6894c4f4db-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1cea400-e767-4428-8f90-26704f1d5213", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6894c4f4db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf", Pod:"calico-apiserver-6894c4f4db-vq79h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f9be9524c3", MAC:"26:46:74:61:75:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:59.260755 containerd[1446]: 2025-05-13 00:22:59.255 [INFO][4631] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf" Namespace="calico-apiserver" Pod="calico-apiserver-6894c4f4db-vq79h" WorkloadEndpoint="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:22:59.277168 systemd[1]: run-netns-cni\x2d75d4f569\x2d3ee5\x2df714\x2d374c\x2d00d344c1225c.mount: Deactivated successfully. 
May 13 00:22:59.292019 systemd-networkd[1376]: calib6bde19d1d6: Link UP May 13 00:22:59.292240 systemd-networkd[1376]: calib6bde19d1d6: Gained carrier May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.186 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0 calico-kube-controllers-69855fccf- calico-system 333d75f2-2d65-4963-bba9-b7a1ce798de8 910 0 2025-05-13 00:22:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69855fccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69855fccf-cpn7n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib6bde19d1d6 [] []}} ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Namespace="calico-system" Pod="calico-kube-controllers-69855fccf-cpn7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.186 [INFO][4646] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Namespace="calico-system" Pod="calico-kube-controllers-69855fccf-cpn7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.221 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" HandleID="k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.233 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" HandleID="k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69855fccf-cpn7n", "timestamp":"2025-05-13 00:22:59.22169262 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.234 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.234 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.234 [INFO][4671] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.236 [INFO][4671] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.242 [INFO][4671] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.256 [INFO][4671] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.258 [INFO][4671] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.261 [INFO][4671] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.263 [INFO][4671] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.265 [INFO][4671] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.270 [INFO][4671] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.284 [INFO][4671] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.284 [INFO][4671] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" host="localhost" May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.284 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
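The [4658] and [4671] IPAM entries narrate the same algorithm both times: take the host-wide IPAM lock, look up this host's block affinities, confirm the affine block 192.168.88.128/26, claim one free address from it, create a handle, write the block back, and release the lock. A compressed sketch of that control flow with hypothetical in-memory types; the real logic lives in Calico's ipam/ipam.go and persists everything to the datastore:

package main

import "fmt"

// Hypothetical in-memory stand-in for a Calico IPAM block; the real
// datastore object also carries allocations, attributes and revisions.
type block struct {
	cidr string
	next int // index of the next free address within the /26
}

// autoAssign mirrors the logged sequence: affinity lookup, block load,
// claim, handle creation, block write-back, all under one host-wide lock.
func autoAssign(host, handleID string, b *block) string {
	// "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock."
	// "Trying affinity for 192.168.88.128/26": the block is already affine
	// to this host, so no new block needs to be claimed.
	ip := fmt.Sprintf("192.168.88.%d/26", 128+b.next) // pick the next free address
	b.next++     // "Writing block in order to claim IPs" persists this change
	_ = handleID // "Creating new handle: k8s-pod-network.<containerID>"
	// "Released host-wide IPAM lock."
	return ip
}

func main() {
	b := &block{cidr: "192.168.88.128/26", next: 4} // .129-.131 already assigned
	fmt.Println(autoAssign("localhost", "k8s-pod-network.1aea659f", b)) // 192.168.88.132/26
}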
May 13 00:22:59.306597 containerd[1446]: 2025-05-13 00:22:59.284 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" HandleID="k8s-pod-network.be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.307230 containerd[1446]: 2025-05-13 00:22:59.287 [INFO][4646] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Namespace="calico-system" Pod="calico-kube-controllers-69855fccf-cpn7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0", GenerateName:"calico-kube-controllers-69855fccf-", Namespace:"calico-system", SelfLink:"", UID:"333d75f2-2d65-4963-bba9-b7a1ce798de8", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69855fccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69855fccf-cpn7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6bde19d1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:59.307230 containerd[1446]: 2025-05-13 00:22:59.288 [INFO][4646] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Namespace="calico-system" Pod="calico-kube-controllers-69855fccf-cpn7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.307230 containerd[1446]: 2025-05-13 00:22:59.288 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6bde19d1d6 ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Namespace="calico-system" Pod="calico-kube-controllers-69855fccf-cpn7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.307230 containerd[1446]: 2025-05-13 00:22:59.292 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Namespace="calico-system" Pod="calico-kube-controllers-69855fccf-cpn7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.307230 containerd[1446]: 2025-05-13 00:22:59.293 [INFO][4646] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Namespace="calico-system" Pod="calico-kube-controllers-69855fccf-cpn7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0", GenerateName:"calico-kube-controllers-69855fccf-", Namespace:"calico-system", SelfLink:"", UID:"333d75f2-2d65-4963-bba9-b7a1ce798de8", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69855fccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c", Pod:"calico-kube-controllers-69855fccf-cpn7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6bde19d1d6", MAC:"fa:96:34:8b:2c:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:22:59.307230 containerd[1446]: 2025-05-13 00:22:59.302 [INFO][4646] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c" Namespace="calico-system" Pod="calico-kube-controllers-69855fccf-cpn7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:22:59.338867 containerd[1446]: time="2025-05-13T00:22:59.338441759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:59.338867 containerd[1446]: time="2025-05-13T00:22:59.338532050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:59.338867 containerd[1446]: time="2025-05-13T00:22:59.338546932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:59.338867 containerd[1446]: time="2025-05-13T00:22:59.338642024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:59.354950 containerd[1446]: time="2025-05-13T00:22:59.354817319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:22:59.354950 containerd[1446]: time="2025-05-13T00:22:59.354891769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:22:59.354950 containerd[1446]: time="2025-05-13T00:22:59.354903810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:59.355132 containerd[1446]: time="2025-05-13T00:22:59.354993381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:22:59.369353 containerd[1446]: time="2025-05-13T00:22:59.369313650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:59.370485 containerd[1446]: time="2025-05-13T00:22:59.370448749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 00:22:59.371428 containerd[1446]: time="2025-05-13T00:22:59.371361740Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:59.374825 containerd[1446]: time="2025-05-13T00:22:59.374790479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:22:59.375857 containerd[1446]: time="2025-05-13T00:22:59.375825285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.836714451s" May 13 00:22:59.375954 containerd[1446]: time="2025-05-13T00:22:59.375938659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 00:22:59.378370 containerd[1446]: time="2025-05-13T00:22:59.378335472Z" level=info msg="CreateContainer within sandbox \"d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:22:59.384832 systemd[1]: Started cri-containerd-1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf.scope - libcontainer container 1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf. May 13 00:22:59.386404 systemd[1]: Started cri-containerd-be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c.scope - libcontainer container be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c. 
May 13 00:22:59.399085 containerd[1446]: time="2025-05-13T00:22:59.399029399Z" level=info msg="CreateContainer within sandbox \"d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3bff30658e51a8b51bb61ac7528fa3e4cbe2f88bd40a43aa28b0f215ba282d39\"" May 13 00:22:59.399354 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:59.400096 containerd[1446]: time="2025-05-13T00:22:59.400060765Z" level=info msg="StartContainer for \"3bff30658e51a8b51bb61ac7528fa3e4cbe2f88bd40a43aa28b0f215ba282d39\"" May 13 00:22:59.403646 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:22:59.423397 containerd[1446]: time="2025-05-13T00:22:59.423328407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6894c4f4db-vq79h,Uid:d1cea400-e767-4428-8f90-26704f1d5213,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf\"" May 13 00:22:59.430586 containerd[1446]: time="2025-05-13T00:22:59.430538928Z" level=info msg="CreateContainer within sandbox \"1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:22:59.431867 systemd[1]: Started cri-containerd-3bff30658e51a8b51bb61ac7528fa3e4cbe2f88bd40a43aa28b0f215ba282d39.scope - libcontainer container 3bff30658e51a8b51bb61ac7528fa3e4cbe2f88bd40a43aa28b0f215ba282d39. May 13 00:22:59.436844 containerd[1446]: time="2025-05-13T00:22:59.436803173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69855fccf-cpn7n,Uid:333d75f2-2d65-4963-bba9-b7a1ce798de8,Namespace:calico-system,Attempt:1,} returns sandbox id \"be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c\"" May 13 00:22:59.438423 containerd[1446]: time="2025-05-13T00:22:59.438400448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 00:22:59.448886 containerd[1446]: time="2025-05-13T00:22:59.448499201Z" level=info msg="CreateContainer within sandbox \"1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"56e56f054685c4dee60708ed1c0e8a13724bd54df8ab4a1e1f9b5cd696fee801\"" May 13 00:22:59.449880 containerd[1446]: time="2025-05-13T00:22:59.449841685Z" level=info msg="StartContainer for \"56e56f054685c4dee60708ed1c0e8a13724bd54df8ab4a1e1f9b5cd696fee801\"" May 13 00:22:59.478910 systemd[1]: Started cri-containerd-56e56f054685c4dee60708ed1c0e8a13724bd54df8ab4a1e1f9b5cd696fee801.scope - libcontainer container 56e56f054685c4dee60708ed1c0e8a13724bd54df8ab4a1e1f9b5cd696fee801. 
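Each "Started cri-containerd-<id>.scope" line pairs a containerd shim (the runc.v2 runtime whose ttrpc plugins load above) with a transient systemd scope. Outside of the CRI, the same create-then-start handshake looks roughly like this against containerd's Go client; a sketch assuming a local containerd socket and the "k8s.io" namespace the CRI uses, not kubelet's actual code path:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// CreateContainer: record the container and its OCI spec.
	container, err := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: creating the task is where the runc.v2 shim and the
	// cri-containerd-<id>.scope cgroup appear; Start then runs the process.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}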
May 13 00:22:59.485049 containerd[1446]: time="2025-05-13T00:22:59.484990578Z" level=info msg="StartContainer for \"3bff30658e51a8b51bb61ac7528fa3e4cbe2f88bd40a43aa28b0f215ba282d39\" returns successfully" May 13 00:22:59.529845 containerd[1446]: time="2025-05-13T00:22:59.529705280Z" level=info msg="StartContainer for \"56e56f054685c4dee60708ed1c0e8a13724bd54df8ab4a1e1f9b5cd696fee801\" returns successfully" May 13 00:22:59.666833 systemd-networkd[1376]: calic1f51f62b57: Gained IPv6LL May 13 00:22:59.967736 containerd[1446]: time="2025-05-13T00:22:59.967465826Z" level=info msg="StopPodSandbox for \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\"" May 13 00:23:00.146230 kubelet[2539]: E0513 00:23:00.146181 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:00.148367 kubelet[2539]: E0513 00:23:00.148336 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.078 [INFO][4894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.078 [INFO][4894] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" iface="eth0" netns="/var/run/netns/cni-03cff433-069b-8d9c-5ad0-d0b78350a884" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.078 [INFO][4894] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" iface="eth0" netns="/var/run/netns/cni-03cff433-069b-8d9c-5ad0-d0b78350a884" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.078 [INFO][4894] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" iface="eth0" netns="/var/run/netns/cni-03cff433-069b-8d9c-5ad0-d0b78350a884" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.079 [INFO][4894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.079 [INFO][4894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.133 [INFO][4903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.134 [INFO][4903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.134 [INFO][4903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.148 [WARNING][4903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.148 [INFO][4903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.151 [INFO][4903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:00.159300 containerd[1446]: 2025-05-13 00:23:00.156 [INFO][4894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:00.161189 containerd[1446]: time="2025-05-13T00:23:00.160727293Z" level=info msg="TearDown network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\" successfully" May 13 00:23:00.161189 containerd[1446]: time="2025-05-13T00:23:00.160762657Z" level=info msg="StopPodSandbox for \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\" returns successfully" May 13 00:23:00.164226 containerd[1446]: time="2025-05-13T00:23:00.163528587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpdwm,Uid:8de2152a-6cd4-4599-a610-aac788d746cf,Namespace:calico-system,Attempt:1,}" May 13 00:23:00.175252 kubelet[2539]: I0513 00:23:00.175193 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6894c4f4db-vq79h" podStartSLOduration=26.175176456 podStartE2EDuration="26.175176456s" podCreationTimestamp="2025-05-13 00:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:23:00.161615479 +0000 UTC m=+45.273746258" watchObservedRunningTime="2025-05-13 00:23:00.175176456 +0000 UTC m=+45.287307235" May 13 00:23:00.176842 kubelet[2539]: I0513 00:23:00.175529 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6894c4f4db-g2hb4" podStartSLOduration=23.337238731 podStartE2EDuration="25.175521777s" podCreationTimestamp="2025-05-13 00:22:35 +0000 UTC" firstStartedPulling="2025-05-13 00:22:57.538707142 +0000 UTC m=+42.650837881" lastFinishedPulling="2025-05-13 00:22:59.376990148 +0000 UTC m=+44.489120927" observedRunningTime="2025-05-13 00:23:00.174972192 +0000 UTC m=+45.287102971" watchObservedRunningTime="2025-05-13 00:23:00.175521777 +0000 UTC m=+45.287652596" May 13 00:23:00.279152 systemd[1]: run-netns-cni\x2d03cff433\x2d069b\x2d8d9c\x2d5ad0\x2dd0b78350a884.mount: Deactivated successfully. 
May 13 00:23:00.317582 systemd-networkd[1376]: calidbd621ba582: Link UP May 13 00:23:00.317862 systemd-networkd[1376]: calidbd621ba582: Gained carrier May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.225 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qpdwm-eth0 csi-node-driver- calico-system 8de2152a-6cd4-4599-a610-aac788d746cf 931 0 2025-05-13 00:22:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qpdwm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidbd621ba582 [] []}} ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Namespace="calico-system" Pod="csi-node-driver-qpdwm" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpdwm-" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.225 [INFO][4914] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Namespace="calico-system" Pod="csi-node-driver-qpdwm" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.257 [INFO][4927] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" HandleID="k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.270 [INFO][4927] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" HandleID="k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000278eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qpdwm", "timestamp":"2025-05-13 00:23:00.257735502 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.270 [INFO][4927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.270 [INFO][4927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.270 [INFO][4927] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.275 [INFO][4927] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.286 [INFO][4927] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.291 [INFO][4927] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.294 [INFO][4927] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.296 [INFO][4927] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.296 [INFO][4927] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.298 [INFO][4927] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.302 [INFO][4927] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.308 [INFO][4927] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.308 [INFO][4927] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" host="localhost" May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.308 [INFO][4927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:23:00.338098 containerd[1446]: 2025-05-13 00:23:00.308 [INFO][4927] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" HandleID="k8s-pod-network.14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.339725 containerd[1446]: 2025-05-13 00:23:00.315 [INFO][4914] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Namespace="calico-system" Pod="csi-node-driver-qpdwm" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpdwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qpdwm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8de2152a-6cd4-4599-a610-aac788d746cf", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qpdwm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbd621ba582", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:00.339725 containerd[1446]: 2025-05-13 00:23:00.315 [INFO][4914] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Namespace="calico-system" Pod="csi-node-driver-qpdwm" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.339725 containerd[1446]: 2025-05-13 00:23:00.315 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbd621ba582 ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Namespace="calico-system" Pod="csi-node-driver-qpdwm" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.339725 containerd[1446]: 2025-05-13 00:23:00.317 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Namespace="calico-system" Pod="csi-node-driver-qpdwm" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.339725 containerd[1446]: 2025-05-13 00:23:00.317 [INFO][4914] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Namespace="calico-system" Pod="csi-node-driver-qpdwm" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpdwm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qpdwm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8de2152a-6cd4-4599-a610-aac788d746cf", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c", Pod:"csi-node-driver-qpdwm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbd621ba582", MAC:"ae:03:d0:42:a8:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:00.339725 containerd[1446]: 2025-05-13 00:23:00.331 [INFO][4914] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c" Namespace="calico-system" Pod="csi-node-driver-qpdwm" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:00.356804 containerd[1446]: time="2025-05-13T00:23:00.356615734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:23:00.356945 containerd[1446]: time="2025-05-13T00:23:00.356856123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:23:00.356945 containerd[1446]: time="2025-05-13T00:23:00.356888167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:00.357034 containerd[1446]: time="2025-05-13T00:23:00.356992779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:00.393833 systemd[1]: Started cri-containerd-14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c.scope - libcontainer container 14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c. 
May 13 00:23:00.404448 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:23:00.417967 containerd[1446]: time="2025-05-13T00:23:00.417927686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpdwm,Uid:8de2152a-6cd4-4599-a610-aac788d746cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c\"" May 13 00:23:00.563290 systemd-networkd[1376]: cali1f9be9524c3: Gained IPv6LL May 13 00:23:01.153432 kubelet[2539]: I0513 00:23:01.153395 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:23:01.154504 kubelet[2539]: E0513 00:23:01.154292 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:01.323379 containerd[1446]: time="2025-05-13T00:23:01.323324715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:01.329784 containerd[1446]: time="2025-05-13T00:23:01.329728901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 00:23:01.330923 containerd[1446]: time="2025-05-13T00:23:01.330402740Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:01.330992 systemd-networkd[1376]: calib6bde19d1d6: Gained IPv6LL May 13 00:23:01.338325 containerd[1446]: time="2025-05-13T00:23:01.338271097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:01.338955 containerd[1446]: time="2025-05-13T00:23:01.338914612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.90048116s" May 13 00:23:01.339025 containerd[1446]: time="2025-05-13T00:23:01.338984660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 00:23:01.341140 containerd[1446]: time="2025-05-13T00:23:01.341061142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:23:01.348914 containerd[1446]: time="2025-05-13T00:23:01.348878374Z" level=info msg="CreateContainer within sandbox \"be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 00:23:01.425758 containerd[1446]: time="2025-05-13T00:23:01.425591155Z" level=info msg="CreateContainer within sandbox \"be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"710a8f2481c46ba30df2e803eae2018a632daaae5075f1d50d75bedf17555e47\"" May 13 00:23:01.428004 containerd[1446]: 
time="2025-05-13T00:23:01.426850262Z" level=info msg="StartContainer for \"710a8f2481c46ba30df2e803eae2018a632daaae5075f1d50d75bedf17555e47\"" May 13 00:23:01.465861 systemd[1]: Started cri-containerd-710a8f2481c46ba30df2e803eae2018a632daaae5075f1d50d75bedf17555e47.scope - libcontainer container 710a8f2481c46ba30df2e803eae2018a632daaae5075f1d50d75bedf17555e47. May 13 00:23:01.497703 containerd[1446]: time="2025-05-13T00:23:01.497639953Z" level=info msg="StartContainer for \"710a8f2481c46ba30df2e803eae2018a632daaae5075f1d50d75bedf17555e47\" returns successfully" May 13 00:23:01.714908 systemd-networkd[1376]: calidbd621ba582: Gained IPv6LL May 13 00:23:01.912380 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:33382.service - OpenSSH per-connection server daemon (10.0.0.1:33382). May 13 00:23:01.965411 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 33382 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:01.969590 sshd[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:01.973676 systemd-logind[1422]: New session 13 of user core. May 13 00:23:01.982838 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:23:02.158346 kubelet[2539]: E0513 00:23:02.158315 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:02.173726 kubelet[2539]: I0513 00:23:02.172187 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69855fccf-cpn7n" podStartSLOduration=25.269813923 podStartE2EDuration="27.172170343s" podCreationTimestamp="2025-05-13 00:22:35 +0000 UTC" firstStartedPulling="2025-05-13 00:22:59.438192183 +0000 UTC m=+44.550322962" lastFinishedPulling="2025-05-13 00:23:01.340548563 +0000 UTC m=+46.452679382" observedRunningTime="2025-05-13 00:23:02.171800621 +0000 UTC m=+47.283931400" watchObservedRunningTime="2025-05-13 00:23:02.172170343 +0000 UTC m=+47.284301122" May 13 00:23:02.198914 sshd[5044]: pam_unix(sshd:session): session closed for user core May 13 00:23:02.206540 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:33382.service: Deactivated successfully. May 13 00:23:02.208990 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:23:02.209966 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. May 13 00:23:02.217952 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:33398.service - OpenSSH per-connection server daemon (10.0.0.1:33398). May 13 00:23:02.220466 systemd-logind[1422]: Removed session 13. May 13 00:23:02.256802 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 33398 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:02.258151 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:02.262074 systemd-logind[1422]: New session 14 of user core. May 13 00:23:02.268811 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 00:23:02.509329 sshd[5059]: pam_unix(sshd:session): session closed for user core May 13 00:23:02.518565 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:33398.service: Deactivated successfully. 
May 13 00:23:02.521794 containerd[1446]: time="2025-05-13T00:23:02.521733925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 00:23:02.522099 containerd[1446]: time="2025-05-13T00:23:02.521777369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:02.523502 containerd[1446]: time="2025-05-13T00:23:02.523459961Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:02.523640 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:23:02.524779 containerd[1446]: time="2025-05-13T00:23:02.524739947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:02.525431 containerd[1446]: time="2025-05-13T00:23:02.525397542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.18425575s" May 13 00:23:02.525480 containerd[1446]: time="2025-05-13T00:23:02.525445628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 00:23:02.526699 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. May 13 00:23:02.528675 containerd[1446]: time="2025-05-13T00:23:02.528444970Z" level=info msg="CreateContainer within sandbox \"14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:23:02.538939 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:40788.service - OpenSSH per-connection server daemon (10.0.0.1:40788). May 13 00:23:02.540937 systemd-logind[1422]: Removed session 14. May 13 00:23:02.549258 containerd[1446]: time="2025-05-13T00:23:02.547408012Z" level=info msg="CreateContainer within sandbox \"14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"54e70fc0b86edaf170d5909af10b94c7d464ef98cc1fa9caf6a3b8e5fc74ae1c\"" May 13 00:23:02.549258 containerd[1446]: time="2025-05-13T00:23:02.548211264Z" level=info msg="StartContainer for \"54e70fc0b86edaf170d5909af10b94c7d464ef98cc1fa9caf6a3b8e5fc74ae1c\"" May 13 00:23:02.592854 systemd[1]: Started cri-containerd-54e70fc0b86edaf170d5909af10b94c7d464ef98cc1fa9caf6a3b8e5fc74ae1c.scope - libcontainer container 54e70fc0b86edaf170d5909af10b94c7d464ef98cc1fa9caf6a3b8e5fc74ae1c. May 13 00:23:02.602212 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 40788 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:02.604973 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:02.609963 systemd-logind[1422]: New session 15 of user core. May 13 00:23:02.617869 systemd[1]: Started session-15.scope - Session 15 of User core. 
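Each "Pulled image" entry above carries two distinct digests: the image id (sha256 of the image's config blob, the same name that appears in the ImageCreate events) and the repo digest (sha256 of the manifest, the name a registry can serve by digest). A sketch of reading both through the containerd Go client follows, assuming the default socket path and the CRI-managed "k8s.io" namespace:

    // Sketch: inspect an already-pulled image's two digests.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Images pulled via CRI live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/csi:v3.29.3")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("repo digest:", img.Target().Digest) // manifest digest
        cfg, err := img.Config(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("image id:", cfg.Digest) // config digest, e.g. sha256:15faf29e...
    }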
May 13 00:23:02.645053 containerd[1446]: time="2025-05-13T00:23:02.645012942Z" level=info msg="StartContainer for \"54e70fc0b86edaf170d5909af10b94c7d464ef98cc1fa9caf6a3b8e5fc74ae1c\" returns successfully" May 13 00:23:02.646644 containerd[1446]: time="2025-05-13T00:23:02.646619525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:23:03.160753 kubelet[2539]: I0513 00:23:03.160718 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:23:04.164221 containerd[1446]: time="2025-05-13T00:23:04.164139648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:04.167210 containerd[1446]: time="2025-05-13T00:23:04.166581155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 00:23:04.167641 containerd[1446]: time="2025-05-13T00:23:04.167535940Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:04.172577 containerd[1446]: time="2025-05-13T00:23:04.172511804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:04.176981 containerd[1446]: time="2025-05-13T00:23:04.176735066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.530080297s" May 13 00:23:04.176981 containerd[1446]: time="2025-05-13T00:23:04.176776031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 00:23:04.182348 containerd[1446]: time="2025-05-13T00:23:04.182201905Z" level=info msg="CreateContainer within sandbox \"14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:23:04.225548 sshd[5077]: pam_unix(sshd:session): session closed for user core May 13 00:23:04.232882 containerd[1446]: time="2025-05-13T00:23:04.232843807Z" level=info msg="CreateContainer within sandbox \"14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"42f40231a7b97f3cd5b980accb60a61760fafb80fd42c402dd1dc99cd0732ecf\"" May 13 00:23:04.233995 containerd[1446]: time="2025-05-13T00:23:04.233432751Z" level=info msg="StartContainer for \"42f40231a7b97f3cd5b980accb60a61760fafb80fd42c402dd1dc99cd0732ecf\"" May 13 00:23:04.240966 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:40788.service: Deactivated successfully. May 13 00:23:04.244416 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:23:04.247582 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. 
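The PullImage requests in these entries arrive over CRI, but with the default runtime they bottom out in containerd's pull-and-unpack path. A minimal direct equivalent, under the same socket and namespace assumptions as the sketch above:

    // Sketch: pull and unpack an image roughly the way the CRI image service does.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx,
            "ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3",
            containerd.WithPullUnpack) // unpack into the snapshotter so a container can start
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), img.Target().Digest)
    }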
May 13 00:23:04.254796 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:40798.service - OpenSSH per-connection server daemon (10.0.0.1:40798). May 13 00:23:04.256760 systemd-logind[1422]: Removed session 15. May 13 00:23:04.294792 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 40798 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:04.296159 sshd[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:04.297834 systemd[1]: Started cri-containerd-42f40231a7b97f3cd5b980accb60a61760fafb80fd42c402dd1dc99cd0732ecf.scope - libcontainer container 42f40231a7b97f3cd5b980accb60a61760fafb80fd42c402dd1dc99cd0732ecf. May 13 00:23:04.302234 systemd-logind[1422]: New session 16 of user core. May 13 00:23:04.309815 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 00:23:04.329562 containerd[1446]: time="2025-05-13T00:23:04.329455620Z" level=info msg="StartContainer for \"42f40231a7b97f3cd5b980accb60a61760fafb80fd42c402dd1dc99cd0732ecf\" returns successfully" May 13 00:23:04.601363 sshd[5138]: pam_unix(sshd:session): session closed for user core May 13 00:23:04.611214 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:40798.service: Deactivated successfully. May 13 00:23:04.615924 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:23:04.619342 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. May 13 00:23:04.636027 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:40806.service - OpenSSH per-connection server daemon (10.0.0.1:40806). May 13 00:23:04.637269 systemd-logind[1422]: Removed session 16. May 13 00:23:04.683324 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 40806 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:04.684192 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:04.688573 systemd-logind[1422]: New session 17 of user core. May 13 00:23:04.697871 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 00:23:04.843830 sshd[5189]: pam_unix(sshd:session): session closed for user core May 13 00:23:04.847373 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:40806.service: Deactivated successfully. May 13 00:23:04.850156 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:23:04.850830 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. May 13 00:23:04.851826 systemd-logind[1422]: Removed session 17. 
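The StopPodSandbox/RemovePodSandbox entries that follow are kubelet garbage-collecting the pods' old sandboxes (eab8d741..., d09c07c4..., f3151aae..., bfbcc49e...), each of which was replaced earlier by a new sandbox; the csi-node-driver WEP, for instance, now records 14f374d1... Two warnings recur and both are benign here: "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP." means the DEL carries a stale container ID so Calico leaves the live endpoint alone, and "Asked to release address but it doesn't exist. Ignoring" means the old sandbox's IP allocation is already gone. A hedged sketch of the first guard (not Calico's actual code):

    package main

    import "fmt"

    // shouldDeleteWEP sketches the stale-DEL guard: a WorkloadEndpoint records
    // the container ID of the sandbox that last ADDed it, and a CNI DEL that
    // carries a different (older) ID must not delete the endpoint.
    func shouldDeleteWEP(cniContainerID, wepContainerID string) bool {
        return wepContainerID == "" || cniContainerID == wepContainerID
    }

    func main() {
        // IDs taken from the entries below: the DEL targets the old sandbox
        // eab8d741..., but the WEP now belongs to the new sandbox 14f374d1...
        old := "eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906"
        current := "14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c"
        fmt.Println(shouldDeleteWEP(old, current)) // false -> warn and skip deletion
    }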
May 13 00:23:04.918023 kubelet[2539]: I0513 00:23:04.917974 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:23:05.053991 kubelet[2539]: I0513 00:23:05.053940 2539 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:23:05.053991 kubelet[2539]: I0513 00:23:05.053989 2539 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:23:06.572501 kubelet[2539]: E0513 00:23:06.572223 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:06.585358 kubelet[2539]: I0513 00:23:06.585301 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qpdwm" podStartSLOduration=27.825137969 podStartE2EDuration="31.585285326s" podCreationTimestamp="2025-05-13 00:22:35 +0000 UTC" firstStartedPulling="2025-05-13 00:23:00.41896881 +0000 UTC m=+45.531099549" lastFinishedPulling="2025-05-13 00:23:04.179116167 +0000 UTC m=+49.291246906" observedRunningTime="2025-05-13 00:23:05.177813532 +0000 UTC m=+50.289944311" watchObservedRunningTime="2025-05-13 00:23:06.585285326 +0000 UTC m=+51.697416105" May 13 00:23:09.858270 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:40808.service - OpenSSH per-connection server daemon (10.0.0.1:40808). May 13 00:23:09.898203 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 40808 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:09.899455 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:09.903334 systemd-logind[1422]: New session 18 of user core. May 13 00:23:09.908845 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:23:10.045511 sshd[5280]: pam_unix(sshd:session): session closed for user core May 13 00:23:10.048885 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:40808.service: Deactivated successfully. May 13 00:23:10.050500 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:23:10.051103 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. May 13 00:23:10.052027 systemd-logind[1422]: Removed session 18. May 13 00:23:14.965716 containerd[1446]: time="2025-05-13T00:23:14.965489113Z" level=info msg="StopPodSandbox for \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\"" May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.002 [WARNING][5310] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qpdwm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8de2152a-6cd4-4599-a610-aac788d746cf", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c", Pod:"csi-node-driver-qpdwm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbd621ba582", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.002 [INFO][5310] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.002 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" iface="eth0" netns="" May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.002 [INFO][5310] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.002 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.026 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.026 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.026 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.035 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.035 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.036 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.040055 containerd[1446]: 2025-05-13 00:23:15.038 [INFO][5310] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:15.041296 containerd[1446]: time="2025-05-13T00:23:15.040089731Z" level=info msg="TearDown network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\" successfully" May 13 00:23:15.041296 containerd[1446]: time="2025-05-13T00:23:15.040113694Z" level=info msg="StopPodSandbox for \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\" returns successfully" May 13 00:23:15.041296 containerd[1446]: time="2025-05-13T00:23:15.040607819Z" level=info msg="RemovePodSandbox for \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\"" May 13 00:23:15.050602 containerd[1446]: time="2025-05-13T00:23:15.050302116Z" level=info msg="Forcibly stopping sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\"" May 13 00:23:15.065939 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:42924.service - OpenSSH per-connection server daemon (10.0.0.1:42924). May 13 00:23:15.104755 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 42924 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:15.105537 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:15.110742 systemd-logind[1422]: New session 19 of user core. May 13 00:23:15.119824 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.088 [WARNING][5345] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qpdwm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8de2152a-6cd4-4599-a610-aac788d746cf", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14f374d1475e5f44f9fabd68d506b71e1becf97f5300953ac7a1e1747067741c", Pod:"csi-node-driver-qpdwm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidbd621ba582", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.089 [INFO][5345] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.089 [INFO][5345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" iface="eth0" netns="" May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.089 [INFO][5345] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.089 [INFO][5345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.109 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.110 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.110 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.119 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.119 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" HandleID="k8s-pod-network.eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" Workload="localhost-k8s-csi--node--driver--qpdwm-eth0" May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.121 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.126340 containerd[1446]: 2025-05-13 00:23:15.124 [INFO][5345] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906" May 13 00:23:15.126340 containerd[1446]: time="2025-05-13T00:23:15.126304945Z" level=info msg="TearDown network for sandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\" successfully" May 13 00:23:15.133939 containerd[1446]: time="2025-05-13T00:23:15.133507211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:23:15.133939 containerd[1446]: time="2025-05-13T00:23:15.133584778Z" level=info msg="RemovePodSandbox \"eab8d741f67b79a2cd6380a374fe623436c2df296fd0ff2ad35066305fd06906\" returns successfully" May 13 00:23:15.134075 containerd[1446]: time="2025-05-13T00:23:15.134054382Z" level=info msg="StopPodSandbox for \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\"" May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.171 [WARNING][5380] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5937a833-2917-44dc-be70-b888b2f1c194", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454", Pod:"coredns-7db6d8ff4d-kpx49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e060f72f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.171 [INFO][5380] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.171 [INFO][5380] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" iface="eth0" netns="" May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.171 [INFO][5380] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.171 [INFO][5380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.193 [INFO][5395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.193 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.193 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.203 [WARNING][5395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.203 [INFO][5395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.205 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.209558 containerd[1446]: 2025-05-13 00:23:15.206 [INFO][5380] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:23:15.210340 containerd[1446]: time="2025-05-13T00:23:15.209589088Z" level=info msg="TearDown network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\" successfully" May 13 00:23:15.210340 containerd[1446]: time="2025-05-13T00:23:15.209613810Z" level=info msg="StopPodSandbox for \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\" returns successfully" May 13 00:23:15.210340 containerd[1446]: time="2025-05-13T00:23:15.210179022Z" level=info msg="RemovePodSandbox for \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\"" May 13 00:23:15.210340 containerd[1446]: time="2025-05-13T00:23:15.210209345Z" level=info msg="Forcibly stopping sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\"" May 13 00:23:15.288523 sshd[5350]: pam_unix(sshd:session): session closed for user core May 13 00:23:15.292589 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:42924.service: Deactivated successfully. May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.249 [WARNING][5418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5937a833-2917-44dc-be70-b888b2f1c194", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4a8f8d3d74457761d9fb5da15648c49253793a507320df90876eaff24f43454", Pod:"coredns-7db6d8ff4d-kpx49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e060f72f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.249 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.249 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" iface="eth0" netns="" May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.249 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.249 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.279 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.279 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.279 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.286 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.286 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" HandleID="k8s-pod-network.d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" Workload="localhost-k8s-coredns--7db6d8ff4d--kpx49-eth0" May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.289 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.295050 containerd[1446]: 2025-05-13 00:23:15.291 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800" May 13 00:23:15.295562 containerd[1446]: time="2025-05-13T00:23:15.295089595Z" level=info msg="TearDown network for sandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\" successfully" May 13 00:23:15.296442 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:23:15.297167 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. May 13 00:23:15.297786 containerd[1446]: time="2025-05-13T00:23:15.297661793Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:23:15.297855 containerd[1446]: time="2025-05-13T00:23:15.297833969Z" level=info msg="RemovePodSandbox \"d09c07c414f1fafdc87511e808256d4caa2ec5b0528b87fd5d8ae7e24819a800\" returns successfully" May 13 00:23:15.298169 systemd-logind[1422]: Removed session 19. May 13 00:23:15.298294 containerd[1446]: time="2025-05-13T00:23:15.298273330Z" level=info msg="StopPodSandbox for \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\"" May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.332 [WARNING][5453] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0", GenerateName:"calico-apiserver-6894c4f4db-", Namespace:"calico-apiserver", SelfLink:"", UID:"775b536a-9cd7-44ec-b10b-8740a8d0f7ab", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6894c4f4db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2", Pod:"calico-apiserver-6894c4f4db-g2hb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd0c9bb2fd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.333 [INFO][5453] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.333 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" iface="eth0" netns="" May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.333 [INFO][5453] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.333 [INFO][5453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.351 [INFO][5461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.351 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.351 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.360 [WARNING][5461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.360 [INFO][5461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.362 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.365472 containerd[1446]: 2025-05-13 00:23:15.363 [INFO][5453] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:23:15.366026 containerd[1446]: time="2025-05-13T00:23:15.365508068Z" level=info msg="TearDown network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\" successfully" May 13 00:23:15.366026 containerd[1446]: time="2025-05-13T00:23:15.365532470Z" level=info msg="StopPodSandbox for \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\" returns successfully" May 13 00:23:15.366404 containerd[1446]: time="2025-05-13T00:23:15.366383069Z" level=info msg="RemovePodSandbox for \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\"" May 13 00:23:15.366448 containerd[1446]: time="2025-05-13T00:23:15.366413832Z" level=info msg="Forcibly stopping sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\"" May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.402 [WARNING][5483] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0", GenerateName:"calico-apiserver-6894c4f4db-", Namespace:"calico-apiserver", SelfLink:"", UID:"775b536a-9cd7-44ec-b10b-8740a8d0f7ab", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6894c4f4db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d00ded3b14fbd10dfbcb1340c6cc22d0415f507546be047139fec26dabcc70a2", Pod:"calico-apiserver-6894c4f4db-g2hb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd0c9bb2fd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.402 [INFO][5483] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.402 [INFO][5483] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" iface="eth0" netns="" May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.402 [INFO][5483] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.402 [INFO][5483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.421 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.421 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.421 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.429 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.429 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" HandleID="k8s-pod-network.f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" Workload="localhost-k8s-calico--apiserver--6894c4f4db--g2hb4-eth0" May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.430 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.433689 containerd[1446]: 2025-05-13 00:23:15.432 [INFO][5483] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c" May 13 00:23:15.434123 containerd[1446]: time="2025-05-13T00:23:15.433700655Z" level=info msg="TearDown network for sandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\" successfully" May 13 00:23:15.436758 containerd[1446]: time="2025-05-13T00:23:15.436724414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:23:15.436838 containerd[1446]: time="2025-05-13T00:23:15.436779139Z" level=info msg="RemovePodSandbox \"f3151aae7dd241cfa9cb458ba5dfd9e9b570461ce2c93ea539aa3d30677df77c\" returns successfully" May 13 00:23:15.437226 containerd[1446]: time="2025-05-13T00:23:15.437199658Z" level=info msg="StopPodSandbox for \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\"" May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.471 [WARNING][5515] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0", GenerateName:"calico-kube-controllers-69855fccf-", Namespace:"calico-system", SelfLink:"", UID:"333d75f2-2d65-4963-bba9-b7a1ce798de8", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69855fccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c", Pod:"calico-kube-controllers-69855fccf-cpn7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6bde19d1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.471 [INFO][5515] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.471 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" iface="eth0" netns="" May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.471 [INFO][5515] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.471 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.499 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.499 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.499 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.510 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.510 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.526 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.530177 containerd[1446]: 2025-05-13 00:23:15.528 [INFO][5515] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:23:15.530707 containerd[1446]: time="2025-05-13T00:23:15.530207500Z" level=info msg="TearDown network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\" successfully" May 13 00:23:15.530707 containerd[1446]: time="2025-05-13T00:23:15.530232743Z" level=info msg="StopPodSandbox for \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\" returns successfully" May 13 00:23:15.531172 containerd[1446]: time="2025-05-13T00:23:15.531147507Z" level=info msg="RemovePodSandbox for \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\"" May 13 00:23:15.531222 containerd[1446]: time="2025-05-13T00:23:15.531180790Z" level=info msg="Forcibly stopping sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\"" May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.565 [WARNING][5546] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0", GenerateName:"calico-kube-controllers-69855fccf-", Namespace:"calico-system", SelfLink:"", UID:"333d75f2-2d65-4963-bba9-b7a1ce798de8", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69855fccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be65b4f96d19c3d57aa4344533ec0b2567372b767f7ff3cf7d0df479262e233c", Pod:"calico-kube-controllers-69855fccf-cpn7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib6bde19d1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.566 [INFO][5546] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.566 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" iface="eth0" netns="" May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.566 [INFO][5546] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.566 [INFO][5546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.585 [INFO][5556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.585 [INFO][5556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.585 [INFO][5556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.595 [WARNING][5556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.595 [INFO][5556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" HandleID="k8s-pod-network.bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" Workload="localhost-k8s-calico--kube--controllers--69855fccf--cpn7n-eth0" May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.596 [INFO][5556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.600329 containerd[1446]: 2025-05-13 00:23:15.598 [INFO][5546] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478" May 13 00:23:15.600329 containerd[1446]: time="2025-05-13T00:23:15.600284181Z" level=info msg="TearDown network for sandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\" successfully" May 13 00:23:15.605388 containerd[1446]: time="2025-05-13T00:23:15.605323807Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:23:15.605489 containerd[1446]: time="2025-05-13T00:23:15.605424097Z" level=info msg="RemovePodSandbox \"bfbcc49e1a3b0f71bf0e010cb7c9fe3095889e7036e2e864e58763235f929478\" returns successfully" May 13 00:23:15.606067 containerd[1446]: time="2025-05-13T00:23:15.606038394Z" level=info msg="StopPodSandbox for \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\"" May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.641 [WARNING][5583] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0", Pod:"coredns-7db6d8ff4d-xcx4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1f51f62b57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.641 [INFO][5583] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.641 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" iface="eth0" netns="" May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.641 [INFO][5583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.641 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.660 [INFO][5594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.661 [INFO][5594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.661 [INFO][5594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.668 [WARNING][5594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.668 [INFO][5594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.669 [INFO][5594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.673086 containerd[1446]: 2025-05-13 00:23:15.671 [INFO][5583] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:23:15.673500 containerd[1446]: time="2025-05-13T00:23:15.673115637Z" level=info msg="TearDown network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\" successfully" May 13 00:23:15.673500 containerd[1446]: time="2025-05-13T00:23:15.673140560Z" level=info msg="StopPodSandbox for \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\" returns successfully" May 13 00:23:15.673590 containerd[1446]: time="2025-05-13T00:23:15.673559958Z" level=info msg="RemovePodSandbox for \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\"" May 13 00:23:15.673629 containerd[1446]: time="2025-05-13T00:23:15.673598242Z" level=info msg="Forcibly stopping sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\"" May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.709 [WARNING][5617] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a4d0cfcc-1589-4e5f-bdf4-ebe5ae5fb30f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b30bec14e4d5000eb5693d1cb72086bad62218a7b8f362542e33498cdc8e3de0", Pod:"coredns-7db6d8ff4d-xcx4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1f51f62b57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.709 [INFO][5617] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.709 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" iface="eth0" netns="" May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.709 [INFO][5617] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.709 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.728 [INFO][5625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.728 [INFO][5625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.728 [INFO][5625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.736 [WARNING][5625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.736 [INFO][5625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" HandleID="k8s-pod-network.0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" Workload="localhost-k8s-coredns--7db6d8ff4d--xcx4r-eth0" May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.737 [INFO][5625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.741746 containerd[1446]: 2025-05-13 00:23:15.739 [INFO][5617] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b" May 13 00:23:15.741746 containerd[1446]: time="2025-05-13T00:23:15.741636054Z" level=info msg="TearDown network for sandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\" successfully" May 13 00:23:15.755571 containerd[1446]: time="2025-05-13T00:23:15.755534580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:23:15.755639 containerd[1446]: time="2025-05-13T00:23:15.755599226Z" level=info msg="RemovePodSandbox \"0385badbc46d7545171daec8abaf6e071928490fc792fb9972d614929747c54b\" returns successfully" May 13 00:23:15.756031 containerd[1446]: time="2025-05-13T00:23:15.756006223Z" level=info msg="StopPodSandbox for \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\"" May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.790 [WARNING][5647] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0", GenerateName:"calico-apiserver-6894c4f4db-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1cea400-e767-4428-8f90-26704f1d5213", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6894c4f4db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf", Pod:"calico-apiserver-6894c4f4db-vq79h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f9be9524c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.790 [INFO][5647] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.790 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" iface="eth0" netns="" May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.790 [INFO][5647] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.790 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.809 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.809 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.809 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.816 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.816 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.818 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.822810 containerd[1446]: 2025-05-13 00:23:15.821 [INFO][5647] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:23:15.823215 containerd[1446]: time="2025-05-13T00:23:15.822840845Z" level=info msg="TearDown network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\" successfully" May 13 00:23:15.823215 containerd[1446]: time="2025-05-13T00:23:15.822865247Z" level=info msg="StopPodSandbox for \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\" returns successfully" May 13 00:23:15.825679 containerd[1446]: time="2025-05-13T00:23:15.825241587Z" level=info msg="RemovePodSandbox for \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\"" May 13 00:23:15.825679 containerd[1446]: time="2025-05-13T00:23:15.825277350Z" level=info msg="Forcibly stopping sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\"" May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.860 [WARNING][5677] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0", GenerateName:"calico-apiserver-6894c4f4db-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1cea400-e767-4428-8f90-26704f1d5213", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6894c4f4db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1aea659f2333cc3508c8bad46df7fabfb716877504940d38e9b3655eb0167acf", Pod:"calico-apiserver-6894c4f4db-vq79h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f9be9524c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.860 [INFO][5677] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.860 [INFO][5677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" iface="eth0" netns="" May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.860 [INFO][5677] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.860 [INFO][5677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.879 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.879 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.879 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.888 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.888 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" HandleID="k8s-pod-network.87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" Workload="localhost-k8s-calico--apiserver--6894c4f4db--vq79h-eth0" May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.889 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:23:15.892814 containerd[1446]: 2025-05-13 00:23:15.891 [INFO][5677] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07" May 13 00:23:15.892814 containerd[1446]: time="2025-05-13T00:23:15.892787074Z" level=info msg="TearDown network for sandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\" successfully" May 13 00:23:15.895415 containerd[1446]: time="2025-05-13T00:23:15.895377673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 00:23:15.895517 containerd[1446]: time="2025-05-13T00:23:15.895448440Z" level=info msg="RemovePodSandbox \"87e79ddd483e5624ff50ef59ae827cf47ccadb3749c204e75b7230f340a69d07\" returns successfully" May 13 00:23:18.067711 kubelet[2539]: I0513 00:23:18.067359 2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:23:20.300318 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:42934.service - OpenSSH per-connection server daemon (10.0.0.1:42934). May 13 00:23:20.343024 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 42934 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:20.344401 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:20.348739 systemd-logind[1422]: New session 20 of user core. May 13 00:23:20.359841 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:23:20.483416 sshd[5696]: pam_unix(sshd:session): session closed for user core May 13 00:23:20.487233 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:42934.service: Deactivated successfully. May 13 00:23:20.488938 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:23:20.491603 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. May 13 00:23:20.492547 systemd-logind[1422]: Removed session 20.